Chapter 4 Determinants
This solutions guide illuminates Chapter 4: Determinants, a critical concept intrinsically linked to square matrices. While a matrix itself is an array of numbers, its determinant is a unique scalar value computed from its elements according to specific rules. Determinants encapsulate important algebraic and geometric properties of the corresponding matrix and the linear transformation it represents. They serve as powerful tools in linear algebra for solving systems of linear equations, finding the inverse of a matrix, calculating areas and volumes in coordinate geometry, and analyzing vector spaces. This chapter focuses on the methods for calculating determinants, understanding their fundamental properties, and applying them to solve various mathematical problems.
The solutions begin by defining the determinant for square matrices of different orders. For a $1 \times 1$ matrix $[a]$, the determinant is simply $a$. For a $2 \times 2$ matrix $\begin{bmatrix} a & b \\ c & d \end{bmatrix}$, the determinant is calculated as $\det(A) = ad - bc$. For matrices of order $3 \times 3$ and higher, the calculation typically involves a method called expansion by minors and cofactors along any chosen row or column. The solutions meticulously explain how to find the Minor ($M_{ij}$) of an element $a_{ij}$ (the determinant of the submatrix obtained by deleting the $i$-th row and $j$-th column) and the corresponding Cofactor ($C_{ij}$), which is the minor multiplied by an appropriate sign factor: $C_{ij} = (-1)^{i+j}M_{ij}$. The determinant is then the sum of the products of the elements of any row (or column) with their corresponding cofactors (e.g., $\det(A) = a_{11}C_{11} + a_{12}C_{12} + a_{13}C_{13}$ for expansion along the first row of a $3 \times 3$ matrix).
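The cofactor expansion described above translates directly into a short recursive routine. The following is a minimal Python sketch (not from the textbook; the function name `det_by_cofactors` is illustrative, and NumPy is assumed to be available only for the cross-check). The test matrix is the one from Example 3 below.

```python
import numpy as np

def det_by_cofactors(a):
    """Determinant of a square matrix (list of lists) by cofactor expansion along row 1."""
    n = len(a)
    if n == 1:
        return a[0][0]
    if n == 2:
        return a[0][0] * a[1][1] - a[0][1] * a[1][0]
    total = 0
    for j in range(n):
        # Minor M_{1,j+1}: delete the first row and the (j+1)-th column
        minor = [row[:j] + row[j + 1:] for row in a[1:]]
        # Cofactor sign (-1)^{1+(j+1)} reduces to (-1)**j with 0-based j
        total += (-1) ** j * a[0][j] * det_by_cofactors(minor)
    return total

A = [[1, 2, 4], [-1, 3, 0], [4, 1, 0]]            # matrix of Example 3
print(det_by_cofactors(A))                        # -52
print(np.linalg.det(np.array(A, dtype=float)))    # approximately -52.0
```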
A significant emphasis is placed on understanding and utilizing the numerous properties of determinants, as these often drastically simplify calculations. Key properties demonstrated in the solutions include:
- The determinant remains unchanged if its rows and columns are interchanged ($\det(A) = \det(A^T)$).
- If any two rows (or columns) are interchanged, the sign of the determinant changes.
- If any two rows (or columns) are identical or proportional, the determinant is zero.
- If each element of a row (or column) is multiplied by a constant $k$, the determinant gets multiplied by $k$.
- The crucial property that adding a multiple of one row (or column) to another row (or column) (e.g., applying $R_i \rightarrow R_i + kR_j$ or $C_i \rightarrow C_i + kC_j$) does not change the value of the determinant. This property is extensively used to introduce zeros into a row or column, simplifying subsequent expansion.
- $\det(AB) = \det(A)\det(B)$ for square matrices $A$ and $B$ of the same order; a quick numerical check of these properties follows in the sketch below.
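A sketch of such a check with NumPy, using arbitrary integer matrices (illustrative; not part of the textbook solutions):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.integers(-5, 6, (3, 3)).astype(float)
B = rng.integers(-5, 6, (3, 3)).astype(float)

# Transpose: det(A) = det(A^T)
assert np.isclose(np.linalg.det(A), np.linalg.det(A.T))

# Interchanging two rows flips the sign
assert np.isclose(np.linalg.det(A[[1, 0, 2]]), -np.linalg.det(A))

# Two identical rows give a zero determinant
C = A.copy(); C[2] = C[0]
assert np.isclose(np.linalg.det(C), 0.0)

# R2 -> R2 + k R1 leaves the determinant unchanged
D = A.copy(); D[1] += 7 * D[0]
assert np.isclose(np.linalg.det(D), np.linalg.det(A))

# Multiplicativity: det(AB) = det(A) det(B)
assert np.isclose(np.linalg.det(A @ B), np.linalg.det(A) * np.linalg.det(B))
print("all property checks passed")
```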
The solutions showcase how these properties allow complex determinants to be evaluated, often without full expansion. Applications of determinants are then explored. One key geometric application is finding the Area of a Triangle with vertices $(x_1, y_1)$, $(x_2, y_2)$, $(x_3, y_3)$, given by the formula $\text{Area} = \frac{1}{2} |x_1(y_2-y_3) + x_2(y_3-y_1) + x_3(y_1-y_2)|$, which can be expressed using a determinant. This also provides a method to check for the collinearity of three points (the area will be zero). Another vital application involves finding the inverse of a square matrix. This first requires calculating the adjoint of the matrix, denoted $\text{adj } A$, which is the transpose of the matrix of cofactors. The solutions demonstrate finding the adjoint and verifying the fundamental relationship $A(\text{adj } A) = (\text{adj } A)A = (\det A)I$, where $I$ is the identity matrix. This relationship directly yields the formula for the inverse of $A$: $A^{-1} = \frac{1}{\det A} (\text{adj } A)$. The inverse exists only if $\det A \neq 0$. Matrices with $\det A = 0$ are called singular (non-invertible), while those with $\det A \neq 0$ are non-singular (invertible).
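The adjoint-and-inverse relationship just described can be sketched in a few lines. The helper `adjoint` below is illustrative (it is not a NumPy built-in), and the $2 \times 2$ test matrix is the one from Question 3 of Exercise 4.1:

```python
import numpy as np

def adjoint(A):
    """Transpose of the cofactor matrix of a square matrix A."""
    n = A.shape[0]
    cof = np.zeros_like(A, dtype=float)
    for i in range(n):
        for j in range(n):
            minor = np.delete(np.delete(A, i, axis=0), j, axis=1)
            cof[i, j] = (-1) ** (i + j) * np.linalg.det(minor)
    return cof.T

A = np.array([[1., 2.], [4., 2.]])
adjA, detA = adjoint(A), np.linalg.det(A)

# A (adj A) = (det A) I, and A^{-1} = (1/det A) adj A when det A != 0
print(np.allclose(A @ adjA, detA * np.eye(2)))      # True
print(np.allclose(adjA / detA, np.linalg.inv(A)))   # True
```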
Finally, a major practical application is presented: solving systems of linear equations (e.g., $a_1x+b_1y+c_1z=d_1$, etc.) using the matrix method. The system is written in matrix form $AX = B$, where $A$ is the coefficient matrix, $X$ is the column matrix of variables, and $B$ is the column matrix of constants. If $A$ is non-singular, the unique solution is given by $X = A^{-1}B$. The solutions demonstrate setting up the matrices, finding $A^{-1}$ using the adjoint method, performing the matrix multiplication $A^{-1}B$ to find the values of the variables, and discuss checking the consistency of the system based on the values of $\det(A)$ and $(\text{adj } A)B$.
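A short sketch of the matrix method on an illustrative system (the system is not from the textbook; in practice `np.linalg.solve` is preferred, but the explicit inverse is used here to mirror $X = A^{-1}B$):

```python
import numpy as np

# Illustrative system:  x + y + z = 6,  x - y + z = 2,  2x + y - z = 1
A = np.array([[1., 1., 1.], [1., -1., 1.], [2., 1., -1.]])
B = np.array([6., 2., 1.])

if not np.isclose(np.linalg.det(A), 0.0):   # A is non-singular, so a unique solution exists
    X = np.linalg.inv(A) @ B                # X = A^{-1} B
    print(X)                                # [1. 2. 3.]
else:
    print("det(A) = 0: examine (adj A)B to decide whether the system is consistent")
```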
Examples 1 to 5 (Before Exercise 4.1)
Example 1: Evaluate $\begin{vmatrix} 2&4\\−1&2 \end{vmatrix}$ .
Answer:
Given determinant is:
$\begin{vmatrix} 2&4\\−1&2 \end{vmatrix}$
To evaluate a 2x2 determinant $\begin{vmatrix} a&b\\c&d \end{vmatrix}$, we use the formula: $ad - bc$.
In this case, $a=2$, $b=4$, $c=-1$, and $d=2$.
So, the determinant is calculated as:
$(2)(2) - (4)(-1)$
$= 4 - (-4)$
$= 4 + 4$
$= 8$
The value of the determinant is:
$\begin{vmatrix} 2&4\\−1&2 \end{vmatrix} = 8$
Example 2: Evaluate $\begin{vmatrix} x&x+1\\x−1&x \end{vmatrix}$ .
Answer:
Given determinant is:
$\begin{vmatrix} x&x+1\\x−1&x \end{vmatrix}$
To evaluate a 2x2 determinant $\begin{vmatrix} a&b\\c&d \end{vmatrix}$, we use the formula: $ad - bc$.
In this case, $a=x$, $b=x+1$, $c=x-1$, and $d=x$.
So, the determinant is calculated as:
$(x)(x) - (x+1)(x-1)$
Using the algebraic identity $(a+b)(a-b) = a^2 - b^2$:
$= x^2 - ((x)^2 - (1)^2)$
$= x^2 - (x^2 - 1)$
$= x^2 - x^2 + 1$
$= 1$
The value of the determinant is:
$\begin{vmatrix} x&x+1\\x−1&x \end{vmatrix} = 1$
Example 3: Evaluate the determinant ∆ = $\begin{vmatrix} 1&2&4\\−1&3&0\\4&1&0 \end{vmatrix}$ .
Answer:
Given determinant ∆ is:
∆ = $\begin{vmatrix} 1&2&4\\−1&3&0\\4&1&0 \end{vmatrix}$
To evaluate a 3x3 determinant, we can expand it along any row or column. Expanding along a column with zeros will simplify the calculation. Let's expand along the third column (C3).
The formula for expansion along the j-th column is:
∆ = $\sum\limits_{i=1}^{3} (-1)^{i+j} a_{ij} M_{ij}$
Expanding along C3 (where $j=3$):
∆ = $(-1)^{1+3} a_{13} M_{13} + (-1)^{2+3} a_{23} M_{23} + (-1)^{3+3} a_{33} M_{33}$
Here, $a_{13} = 4$, $a_{23} = 0$, $a_{33} = 0$.
$M_{13}$ is the minor obtained by deleting the 1st row and 3rd column:
$M_{13} = \begin{vmatrix} −1&3\\4&1 \end{vmatrix} = (-1)(1) - (3)(4) = -1 - 12 = -13$
Since $a_{23}=0$ and $a_{33}=0$, the terms $(-1)^{2+3} a_{23} M_{23}$ and $(-1)^{3+3} a_{33} M_{33}$ will be zero.
So, ∆ = $(+1) (4) M_{13} + (-1) (0) M_{23} + (+1) (0) M_{33}$
∆ = $4 \times (-13) + 0 + 0$
∆ = $-52$
The value of the determinant is:
∆ = $-52$
Example 4: Evaluate ∆ = $\begin{vmatrix} 0& \sinα & -\cosα \\ −\sinα&0&\sinβ \\ \cosα&-\sinβ&0 \end{vmatrix}$ .
Answer:
Given determinant ∆ is:
∆ = $\begin{vmatrix} 0& \sinα & -\cosα \\ −\sinα&0&\sinβ \\ \cosα&-\sinβ&0 \end{vmatrix}$
To evaluate a 3x3 determinant, we can expand it along any row or column. Let's expand along the first row (R1).
The formula for expansion along the first row is:
∆ = $a_{11}C_{11} + a_{12}C_{12} + a_{13}C_{13}$
Where $C_{ij} = (-1)^{i+j} M_{ij}$ is the cofactor and $M_{ij}$ is the minor.
Here, $a_{11} = 0$, $a_{12} = \sinα$, $a_{13} = -\cosα$.
$C_{11} = (-1)^{1+1} M_{11} = (+1) \begin{vmatrix} 0&\sinβ \\ -\sinβ&0 \end{vmatrix} = (0)(0) - (\sinβ)(-\sinβ) = 0 - (-\sin^2β) = \sin^2β$
$C_{12} = (-1)^{1+2} M_{12} = (-1) \begin{vmatrix} -\sinα&\sinβ \\ \cosα&0 \end{vmatrix} = (-1)((-\sinα)(0) - (\sinβ)(\cosα)) = (-1)(0 - \sinβ\cosα) = \sinβ\cosα$
$C_{13} = (-1)^{1+3} M_{13} = (+1) \begin{vmatrix} -\sinα&0 \\ \cosα&-\sinβ \end{vmatrix} = (+1)((-\sinα)(-\sinβ) - (0)(\cosα)) = \sinα\sinβ$
Now substitute these values into the expansion formula:
∆ = $(0) (\sin^2β) + (\sinα) (\sinβ\cosα) + (-\cosα) (\sinα\sinβ)$
∆ = $0 + \sinα\sinβ\cosα - \cosα\sinα\sinβ$
∆ = $\sinα\sinβ\cosα - \sinα\sinβ\cosα$
∆ = $0$
The value of the determinant is:
∆ = $0$
Example 5: Find values of x for which $\begin{vmatrix} 3&x\\x&1 \end{vmatrix}$ = $\begin{vmatrix} 3&2\\4&1 \end{vmatrix}$ .
Answer:
Given that the two determinants are equal:
$\begin{vmatrix} 3&x\\x&1 \end{vmatrix}$ = $\begin{vmatrix} 3&2\\4&1 \end{vmatrix}$
First, evaluate the left determinant $\begin{vmatrix} 3&x\\x&1 \end{vmatrix}$ using the formula $ad-bc$:
$\begin{vmatrix} 3&x\\x&1 \end{vmatrix} = (3)(1) - (x)(x) = 3 - x^2$
Next, evaluate the right determinant $\begin{vmatrix} 3&2\\4&1 \end{vmatrix}$ using the formula $ad-bc$:
$\begin{vmatrix} 3&2\\4&1 \end{vmatrix} = (3)(1) - (2)(4) = 3 - 8 = -5$
Now, set the two determinant values equal to each other as given in the problem:
$3 - x^2 = -5$
Rearrange the equation to solve for $x$:
$x^2 = 3 - (-5)$
$x^2 = 3 + 5$
$x^2 = 8$
Take the square root of both sides:
$x = \pm \sqrt{8}$
Simplify the square root:
$x = \pm \sqrt{4 \times 2}$
$x = \pm 2\sqrt{2}$
The values of x for which the equality holds are:
$x = 2\sqrt{2}$ or $x = -2\sqrt{2}$
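The same equation can also be set up and solved symbolically. A small SymPy sketch (assuming SymPy is available):

```python
import sympy as sp

x = sp.symbols('x')
lhs = sp.Matrix([[3, x], [x, 1]]).det()   # 3 - x**2
rhs = sp.Matrix([[3, 2], [4, 1]]).det()   # -5
print(sp.solve(sp.Eq(lhs, rhs), x))       # [-2*sqrt(2), 2*sqrt(2)]
```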
Exercise 4.1
Evaluate the determinants in Exercises 1 and 2.
Question 1. $\begin{vmatrix} 2&4\\−5&−1 \end{vmatrix}$
Answer:
Given determinant is:
$\begin{vmatrix} 2&4\\−5&−1 \end{vmatrix}$
To evaluate a 2x2 determinant $\begin{vmatrix} a&b\\c&d \end{vmatrix}$, we use the formula: $ad - bc$.
In this case, $a=2$, $b=4$, $c=-5$, and $d=-1$.
So, the determinant is calculated as:
$(2)(-1) - (4)(-5)$
$= -2 - (-20)$
$= -2 + 20$
$= 18$
The value of the determinant is:
$\begin{vmatrix} 2&4\\−5&−1 \end{vmatrix} = 18$
Question 2.
(i) $\begin{vmatrix} \cosθ&−\sinθ\\\sinθ&\cosθ \end{vmatrix}$
(ii) $\begin{vmatrix} x^2−x+1&x−1\\x+1&x+1 \end{vmatrix}$
Answer:
(i) Given determinant is:
$\begin{vmatrix} \cosθ&−\sinθ\\\sinθ&\cosθ \end{vmatrix}$
To evaluate a 2x2 determinant $\begin{vmatrix} a&b\\c&d \end{vmatrix}$, we use the formula: $ad - bc$.
In this case, $a=\cosθ$, $b=−\sinθ$, $c=\sinθ$, and $d=\cosθ$.
So, the determinant is calculated as:
$(\cosθ)(\cosθ) - (-\sinθ)(\sinθ)$
$= \cos^2θ - (-\sin^2θ)$
$= \cos^2θ + \sin^2θ$
Using the trigonometric identity $\sin^2θ + \cos^2θ = 1$:
$= 1$
The value of the determinant is:
$\begin{vmatrix} \cosθ&−\sinθ\\\sinθ&\cosθ \end{vmatrix} = 1$
(ii) Given determinant is:
$\begin{vmatrix} x^2−x+1&x−1\\x+1&x+1 \end{vmatrix}$
To evaluate a 2x2 determinant $\begin{vmatrix} a&b\\c&d \end{vmatrix}$, we use the formula: $ad - bc$.
In this case, $a=x^2−x+1$, $b=x−1$, $c=x+1$, and $d=x+1$.
So, the determinant is calculated as:
$(x^2−x+1)(x+1) - (x−1)(x+1)$
Use the sum of cubes identity $(a^2-ab+b^2)(a+b) = a^3+b^3$ for the first term, with $a=x$ and $b=1$. Or simply expand it.
$(x^2(x+1) - x(x+1) + 1(x+1)) - (x^2 - 1^2)$
$(x^3 + x^2 - x^2 - x + x + 1) - (x^2 - 1)$
$(x^3 + 1) - (x^2 - 1)$
$= x^3 + 1 - x^2 + 1$
$= x^3 - x^2 + 2$
The value of the determinant is:
$\begin{vmatrix} x^2−x+1&x−1\\x+1&x+1 \end{vmatrix} = x^3 - x^2 + 2$
Question 3. If A = $\begin{bmatrix} 1&2\\4&2 \end{bmatrix}$ , then show that | 2A | = 4 | A |
Answer:
Given:
The matrix A is given as:
$A = \begin{bmatrix} 1 & 2 \\ 4 & 2 \end{bmatrix}$
To Show:
We need to show that $|2A| = 4|A|$.
Solution:
First, we will calculate the determinant of the given matrix A.
$|A| = \det(A) = \det\begin{bmatrix} 1 & 2 \\ 4 & 2 \end{bmatrix}$
The determinant of a $2 \times 2$ matrix $\begin{bmatrix} a & b \\ c & d \end{bmatrix}$ is given by $ad - bc$.
Using this formula for matrix A:
$|A| = (1 \times 2) - (2 \times 4)$
$|A| = 2 - 8$
$|A| = -6$
Next, we will find the matrix 2A by multiplying each element of matrix A by the scalar 2.
$2A = 2 \times \begin{bmatrix} 1 & 2 \\ 4 & 2 \end{bmatrix}$
$2A = \begin{bmatrix} 2 \times 1 & 2 \times 2 \\ 2 \times 4 & 2 \times 2 \end{bmatrix}$
$2A = \begin{bmatrix} 2 & 4 \\ 8 & 4 \end{bmatrix}$
Now, we will calculate the determinant of the matrix 2A.
$|2A| = \det(2A) = \det\begin{bmatrix} 2 & 4 \\ 8 & 4 \end{bmatrix}$
Using the determinant formula for a $2 \times 2$ matrix:
$|2A| = (2 \times 4) - (4 \times 8)$
$|2A| = 8 - 32$
$|2A| = -24$
Finally, we will calculate $4|A|$ and compare it with $|2A|$.
We found earlier that $|A| = -6$.
$4|A| = 4 \times (-6)$
$4|A| = -24$
Comparing the values of $|2A|$ and $4|A|$:
We have $|2A| = -24$ and $4|A| = -24$.
Since $|2A| = -24$ and $4|A| = -24$, it follows that $|2A| = 4|A|$.
Thus, it is shown that $|2A| = 4|A|$ for the given matrix A.
Question 4. If A = $\begin{bmatrix} 1&0&1\\0&1&2\\0&0&4 \end{bmatrix}$ , then show that | 3 A | = 27 | A |
Answer:
Given:
The matrix A is given as:
$A = \begin{bmatrix} 1 & 0 & 1 \\ 0 & 1 & 2 \\ 0 & 0 & 4 \end{bmatrix}$
To Show:
We need to show that $|3A| = 27|A|$.
Solution:
First, we will calculate the determinant of the given matrix A.
$|A| = \det(A) = \det\begin{bmatrix} 1 & 0 & 1 \\ 0 & 1 & 2 \\ 0 & 0 & 4 \end{bmatrix}$
Since A is an upper triangular matrix, its determinant is the product of its diagonal elements.
$|A| = 1 \times 1 \times 4$
$|A| = 4$
Next, we will find the matrix 3A by multiplying each element of matrix A by the scalar 3.
$3A = 3 \times \begin{bmatrix} 1 & 0 & 1 \\ 0 & 1 & 2 \\ 0 & 0 & 4 \end{bmatrix}$
$3A = \begin{bmatrix} 3 \times 1 & 3 \times 0 & 3 \times 1 \\ 3 \times 0 & 3 \times 1 & 3 \times 2 \\ 3 \times 0 & 3 \times 0 & 3 \times 4 \end{bmatrix}$
$3A = \begin{bmatrix} 3 & 0 & 3 \\ 0 & 3 & 6 \\ 0 & 0 & 12 \end{bmatrix}$
Now, we will calculate the determinant of the matrix 3A.
$|3A| = \det(3A) = \det\begin{bmatrix} 3 & 0 & 3 \\ 0 & 3 & 6 \\ 0 & 0 & 12 \end{bmatrix}$
Since 3A is also an upper triangular matrix, its determinant is the product of its diagonal elements.
$|3A| = 3 \times 3 \times 12$
$|3A| = 9 \times 12$
$|3A| = 108$
Finally, we will calculate $27|A|$ and compare it with $|3A|$.
We found earlier that $|A| = 4$.
$27|A| = 27 \times 4$
$27|A| = 108$
Comparing the values of $|3A|$ and $27|A|$:
We have $|3A| = 108$ and $27|A| = 108$.
Since $|3A| = 108$ and $27|A| = 108$, it follows that $|3A| = 27|A|$.
Thus, it is shown that $|3A| = 27|A|$ for the given matrix A.
This also demonstrates the property that for an $n \times n$ matrix $A$ and a scalar $k$, $|kA| = k^n |A|$. In this case, $n=3$ and $k=3$, so $|3A| = 3^3 |A| = 27|A|$.
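This scaling property is easy to confirm numerically for the matrix of this question (an illustrative NumPy check):

```python
import numpy as np

A = np.array([[1., 0., 1.], [0., 1., 2.], [0., 0., 4.]])   # matrix from Question 4
k, n = 3, A.shape[0]
print(np.isclose(np.linalg.det(k * A), k**n * np.linalg.det(A)))   # True: |3A| = 27|A|
```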
Question 5. Evaluate the determinants
(i) $\begin{vmatrix} 3&−1&-2\\0&0&−1\\3&−5&0 \end{vmatrix}$
(ii) $\begin{vmatrix} 3&−4&5\\1&1&−2\\2&3&1 \end{vmatrix}$
(iii) $\begin{vmatrix} 0&1&2\\-1&0&−3\\-2&3&0 \end{vmatrix}$
(iv) $\begin{vmatrix} 2&−1&-2\\0&2&−1\\3&−5&0 \end{vmatrix}$
Answer:
(i) Evaluate $\begin{vmatrix} 3&−1&-2\\0&0&−1\\3&−5&0 \end{vmatrix}$
Solution:
We can evaluate the determinant by expanding along the second row (R2), as it contains two zeros. The signs for the terms in the cofactor expansion along R2 are $-, +, -$.
$\begin{vmatrix} 3 & −1 & -2 \\ 0 & 0 & −1 \\ 3 & −5 & 0 \end{vmatrix} = 0 \cdot C_{21} + 0 \cdot C_{22} + (-1) \cdot C_{23}$
Here, $C_{ij} = (-1)^{i+j} M_{ij}$, where $M_{ij}$ is the minor obtained by deleting the i-th row and j-th column.
We only need to calculate $C_{23}$. The element is $-1$ at position (2, 3).
$C_{23} = (-1)^{2+3} M_{23} = (-1)^5 M_{23} = -M_{23}$
$M_{23}$ is the determinant of the matrix obtained by removing row 2 and column 3:
$M_{23} = \det\begin{bmatrix} 3 & -1 \\ 3 & -5 \end{bmatrix} = (3 \times -5) - (-1 \times 3) = -15 - (-3) = -15 + 3 = -12$
So, $C_{23} = -(-12) = 12$.
Now, substitute this back into the expansion:
$\begin{vmatrix} 3 & −1 & -2 \\ 0 & 0 & −1 \\ 3 & −5 & 0 \end{vmatrix} = 0 + 0 + (-1) \cdot 12 = -12$
The value of the determinant is $-12$.
(ii) Evaluate $\begin{vmatrix} 3&−4&5\\1&1&−2\\2&3&1 \end{vmatrix}$
Solution:
We can evaluate the determinant by expanding along the first row (R1). The signs for the terms are $+, -, +$.
$\begin{vmatrix} 3 & -4 & 5 \\ 1 & 1 & -2 \\ 2 & 3 & 1 \end{vmatrix} = 3 \cdot \det\begin{bmatrix} 1 & -2 \\ 3 & 1 \end{bmatrix} - (-4) \cdot \det\begin{bmatrix} 1 & -2 \\ 2 & 1 \end{bmatrix} + 5 \cdot \det\begin{bmatrix} 1 & 1 \\ 2 & 3 \end{bmatrix}$
Calculate the $2 \times 2$ determinants:
$\det\begin{bmatrix} 1 & -2 \\ 3 & 1 \end{bmatrix} = (1 \times 1) - (-2 \times 3) = 1 - (-6) = 1 + 6 = 7$
$\det\begin{bmatrix} 1 & -2 \\ 2 & 1 \end{bmatrix} = (1 \times 1) - (-2 \times 2) = 1 - (-4) = 1 + 4 = 5$
$\det\begin{bmatrix} 1 & 1 \\ 2 & 3 \end{bmatrix} = (1 \times 3) - (1 \times 2) = 3 - 2 = 1$
Substitute these values back into the expansion:
$\begin{vmatrix} 3 & −4 & 5 \\ 1 & 1 & −2 \\ 2 & 3 & 1 \end{vmatrix} = 3 \cdot (7) - (-4) \cdot (5) + 5 \cdot (1)$
$= 21 + 4 \cdot 5 + 5$
$= 21 + 20 + 5$
$= 46$
The value of the determinant is $46$.
(iii) Evaluate $\begin{vmatrix} 0&1&2\\-1&0&−3\\-2&3&0 \end{vmatrix}$
Solution:
We can evaluate the determinant by expanding along the first row (R1). The signs for the terms are $+, -, +$.
$\begin{vmatrix} 0 & 1 & 2 \\ -1 & 0 & -3 \\ -2 & 3 & 0 \end{vmatrix} = 0 \cdot \det\begin{bmatrix} 0 & -3 \\ 3 & 0 \end{bmatrix} - 1 \cdot \det\begin{bmatrix} -1 & -3 \\ -2 & 0 \end{bmatrix} + 2 \cdot \det\begin{bmatrix} -1 & 0 \\ -2 & 3 \end{bmatrix}$
Calculate the $2 \times 2$ determinants:
$\det\begin{bmatrix} 0 & -3 \\ 3 & 0 \end{bmatrix} = (0 \times 0) - (-3 \times 3) = 0 - (-9) = 9$
$\det\begin{bmatrix} -1 & -3 \\ -2 & 0 \end{bmatrix} = (-1 \times 0) - (-3 \times -2) = 0 - 6 = -6$
$\det\begin{bmatrix} -1 & 0 \\ -2 & 3 \end{bmatrix} = (-1 \times 3) - (0 \times -2) = -3 - 0 = -3$
Substitute these values back into the expansion:
$\begin{vmatrix} 0 & 1 & 2 \\ -1 & 0 & −3 \\ -2 & 3 & 0 \end{vmatrix} = 0 \cdot (9) - 1 \cdot (-6) + 2 \cdot (-3)$
$= 0 + 6 - 6$
$= 0$
The value of the determinant is $0$.
(iv) Evaluate $\begin{vmatrix} 2&−1&-2\\0&2&−1\\3&−5&0 \end{vmatrix}$
Solution:
We can evaluate the determinant by expanding along the first column (C1). The signs for the terms are $+, -, +$.
$\begin{vmatrix} 2 & -1 & -2 \\ 0 & 2 & -1 \\ 3 & -5 & 0 \end{vmatrix} = 2 \cdot \det\begin{bmatrix} 2 & -1 \\ -5 & 0 \end{bmatrix} - 0 \cdot \det\begin{bmatrix} -1 & -2 \\ -5 & 0 \end{bmatrix} + 3 \cdot \det\begin{bmatrix} -1 & -2 \\ 2 & -1 \end{bmatrix}$
Note that the second term is zero because the element is zero.
Calculate the $2 \times 2$ determinants:
$\det\begin{bmatrix} 2 & -1 \\ -5 & 0 \end{bmatrix} = (2 \times 0) - (-1 \times -5) = 0 - 5 = -5$
$\det\begin{bmatrix} -1 & -2 \\ 2 & -1 \end{bmatrix} = (-1 \times -1) - (-2 \times 2) = 1 - (-4) = 1 + 4 = 5$
Substitute these values back into the expansion:
$\begin{vmatrix} 2 & −1 & -2 \\ 0 & 2 & −1 \\ 3 & −5 & 0 \end{vmatrix} = 2 \cdot (-5) - 0 + 3 \cdot (5)$
$= -10 + 15$
$= 5$
The value of the determinant is $5$.
Question 6. If A = $\begin{bmatrix} 1&1&−2\\2&1&−3\\5&4&−9 \end{bmatrix}$ , find |A|
Answer:
Given:
The matrix A is given as:
$A = \begin{bmatrix} 1 & 1 & -2 \\ 2 & 1 & -3 \\ 5 & 4 & -9 \end{bmatrix}$
To Find:
We need to find the determinant of matrix A, denoted as $|A|$.
Solution:
We can evaluate the determinant of the $3 \times 3$ matrix by expanding along the first row (R1). The formula for the determinant expanded along the first row is:
$|A| = a_{11} \cdot C_{11} + a_{12} \cdot C_{12} + a_{13} \cdot C_{13}$
where $a_{ij}$ are the elements of the matrix and $C_{ij}$ are the corresponding cofactors. The cofactors are given by $C_{ij} = (-1)^{i+j} M_{ij}$, where $M_{ij}$ is the minor (determinant of the submatrix obtained by deleting the i-th row and j-th column).
The elements of the first row are $a_{11}=1$, $a_{12}=1$, and $a_{13}=-2$.
Now, we calculate the minors and cofactors:
For $a_{11}=1$:
$M_{11} = \det\begin{bmatrix} 1 & -3 \\ 4 & -9 \end{bmatrix} = (1 \times -9) - (-3 \times 4) = -9 - (-12) = -9 + 12 = 3$
$C_{11} = (-1)^{1+1} M_{11} = (+1) \times 3 = 3$
For $a_{12}=1$:
$M_{12} = \det\begin{bmatrix} 2 & -3 \\ 5 & -9 \end{bmatrix} = (2 \times -9) - (-3 \times 5) = -18 - (-15) = -18 + 15 = -3$
$C_{12} = (-1)^{1+2} M_{12} = (-1) \times (-3) = 3$
For $a_{13}=-2$:
$M_{13} = \det\begin{bmatrix} 2 & 1 \\ 5 & 4 \end{bmatrix} = (2 \times 4) - (1 \times 5) = 8 - 5 = 3$
$C_{13} = (-1)^{1+3} M_{13} = (+1) \times 3 = 3$
Now, substitute these values into the determinant formula:
$|A| = a_{11} \cdot C_{11} + a_{12} \cdot C_{12} + a_{13} \cdot C_{13}$
$|A| = 1 \cdot (3) + 1 \cdot (3) + (-2) \cdot (3)$
$|A| = 3 + 3 - 6$
$|A| = 6 - 6$
$|A| = 0$
The value of the determinant of matrix A is $0$.
Question 7. Find values of x, if
(i) $\begin{vmatrix} 2&4\\5&1 \end{vmatrix}$ = $\begin{vmatrix} 2x&4\\6&x \end{vmatrix}$
(ii) $\begin{vmatrix} 2&3\\4&5 \end{vmatrix}$ = $\begin{vmatrix} x&3\\2x&5 \end{vmatrix}$
Answer:
(i) Find x if $\begin{vmatrix} 2&4\\5&1 \end{vmatrix}$ = $\begin{vmatrix} 2x&4\\6&x \end{vmatrix}$
Solution:
We need to evaluate the determinants on both sides of the equation and then solve for $x$.
The determinant of a $2 \times 2$ matrix $\begin{bmatrix} a & b \\ c & d \end{bmatrix}$ is given by $ad - bc$.
Left side determinant:
$\begin{vmatrix} 2&4\\5&1 \end{vmatrix} = (2 \times 1) - (4 \times 5)$
$= 2 - 20$
$= -18$
Right side determinant:
$\begin{vmatrix} 2x&4\\6&x \end{vmatrix} = (2x \times x) - (4 \times 6)$
$= 2x^2 - 24$
Equating the two determinants:
$-18 = 2x^2 - 24$
Now, solve the equation for $x$:
$2x^2 - 24 = -18$
$2x^2 = -18 + 24$
$2x^2 = 6$
$x^2 = \frac{6}{2}$
$x^2 = 3$
Taking the square root of both sides:
$x = \pm\sqrt{3}$
The values of $x$ are $\sqrt{3}$ and $-\sqrt{3}$.
(ii) Find x if $\begin{vmatrix} 2&3\\4&5 \end{vmatrix}$ = $\begin{vmatrix} x&3\\2x&5 \end{vmatrix}$
Solution:
We need to evaluate the determinants on both sides of the equation and then solve for $x$.
The determinant of a $2 \times 2$ matrix $\begin{bmatrix} a & b \\ c & d \end{bmatrix}$ is given by $ad - bc$.
Left side determinant:
$\begin{vmatrix} 2&3\\4&5 \end{vmatrix} = (2 \times 5) - (3 \times 4)$
$= 10 - 12$
$= -2$
Right side determinant:
$\begin{vmatrix} x&3\\2x&5 \end{vmatrix} = (x \times 5) - (3 \times 2x)$
$= 5x - 6x$
$= -x$
Equating the two determinants:
$-2 = -x$
Multiplying both sides by -1:
$x = 2$
The value of $x$ is $2$.
Question 8. If $\begin{vmatrix} x&2\\18&x \end{vmatrix}$ = $\begin{vmatrix} 6&2\\18&6 \end{vmatrix}$ , then x is equal to
(A) 6
(B) ± 6
(C) – 6
(D) 0
Answer:
Given:
The equation involving determinants is:
$\begin{vmatrix} x&2\\18&x \end{vmatrix} = \begin{vmatrix} 6&2\\18&6 \end{vmatrix}$
To Find:
We need to find the value(s) of $x$ that satisfy the given equation.
Solution:
We will evaluate the determinant on both sides of the equation.
The determinant of a $2 \times 2$ matrix $\begin{bmatrix} a & b \\ c & d \end{bmatrix}$ is given by $ad - bc$.
Evaluate the determinant on the left side:
$\begin{vmatrix} x&2\\18&x \end{vmatrix} = (x \times x) - (2 \times 18)$
$= x^2 - 36$
Evaluate the determinant on the right side:
$\begin{vmatrix} 6&2\\18&6 \end{vmatrix} = (6 \times 6) - (2 \times 18)$
$= 36 - 36$
$= 0$
Now, equate the two determinant values as given in the problem:
$x^2 - 36 = 0$
Solve the resulting equation for $x$:
$x^2 = 36$
Taking the square root of both sides:
$x = \pm\sqrt{36}$
$x = \pm 6$
The values of $x$ that satisfy the equation are $x = 6$ and $x = -6$.
Comparing this result with the given options, the correct option is (B).
The final answer is $\pm 6$.
Examples 6 to 16 (Before Exercise 4.2)
Example 6: Verify Property 1 for ∆ = $\begin{vmatrix} 2&−3&5\\6&0&4\\1&5&−7 \end{vmatrix}$
Answer:
Here is the verification of Property 1 for the given determinant.
Given:
The determinant $\Delta$ is given by:
$\Delta = \begin{vmatrix} 2& -3& 5 \\ 6& 0& 4 \\ 1& 5& -7 \end{vmatrix}$
To Verify:
Property 1 of determinants.
Property 1:
The value of a determinant remains unchanged if its rows and columns are interchanged (i.e., if the transpose of the matrix is taken).
Symbolically, if A is a square matrix, then $\det(A) = \det(A')$, where A' is the transpose of A.
Verification:
First, we calculate the value of the given determinant $\Delta$. We can expand along the first row:
$\det(\Delta) = 2 \begin{vmatrix} 0 & 4 \\ 5 & -7 \end{vmatrix} - (-3) \begin{vmatrix} 6 & 4 \\ 1 & -7 \end{vmatrix} + 5 \begin{vmatrix} 6 & 0 \\ 1 & 5 \end{vmatrix}$
$\det(\Delta) = 2 ((0)(-7) - (4)(5)) + 3 ((6)(-7) - (4)(1)) + 5 ((6)(5) - (0)(1))$
$\det(\Delta) = 2 (0 - 20) + 3 (-42 - 4) + 5 (30 - 0)$
$\det(\Delta) = 2 (-20) + 3 (-46) + 5 (30)$
$\det(\Delta) = -40 - 138 + 150$
$\det(\Delta) = -178 + 150$
$\det(\Delta) = -28$
Next, we find the transpose of the given matrix. The transpose $\Delta'$ is obtained by interchanging the rows and columns of $\Delta$.
$\Delta' = \begin{vmatrix} 2& 6& 1 \\ -3& 0& 5 \\ 5& 4& -7 \end{vmatrix}$
Now, we calculate the value of the determinant of the transpose, $\det(\Delta')$. We can expand along the second row to simplify calculation due to the presence of zero:
$\det(\Delta') = -(-3) \begin{vmatrix} 6 & 1 \\ 4 & -7 \end{vmatrix} + 0 \begin{vmatrix} 2 & 1 \\ 5 & -7 \end{vmatrix} - 5 \begin{vmatrix} 2 & 6 \\ 5 & 4 \end{vmatrix}$
$\det(\Delta') = 3 ((6)(-7) - (1)(4)) + 0 - 5 ((2)(4) - (6)(5))$
$\det(\Delta') = 3 (-42 - 4) - 5 (8 - 30)$
$\det(\Delta') = 3 (-46) - 5 (-22)$
$\det(\Delta') = -138 - (-110)$
$\det(\Delta') = -138 + 110$
$\det(\Delta') = -28$
Conclusion:
We found that the value of the original determinant is $\det(\Delta) = -28$, and the value of the determinant of its transpose is $\det(\Delta') = -28$.
Since $\det(\Delta) = \det(\Delta')$, Property 1 is verified for the given determinant.
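The same check can be repeated numerically for this particular matrix (an illustrative cross-check):

```python
import numpy as np

D = np.array([[2., -3., 5.], [6., 0., 4.], [1., 5., -7.]])
print(np.linalg.det(D), np.linalg.det(D.T))   # both approximately -28.0
```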
Example 7: Verify Property 2 for ∆ = $\begin{vmatrix} 2&−3&5\\6&0&4\\1&5&−7 \end{vmatrix}$ .
Answer:
Here is the verification of Property 2 for the given determinant.
Given:
The determinant $\Delta$ is given by:
$\Delta = \begin{vmatrix} 2& -3& 5 \\ 6& 0& 4 \\ 1& 5& -7 \end{vmatrix}$
To Verify:
Property 2 of determinants.
Property 2:
If any two rows (or columns) of a determinant are interchanged, then the sign of the determinant changes.
Verification:
First, we calculate the value of the original determinant $\Delta$. As calculated in Example 6, the value is:
$\det(\Delta) = -28$
Now, let's interchange R1 and R2 to create a new determinant, say $\Delta_1$.
$\Delta_1 = \begin{vmatrix} 6& 0& 4 \\ 2& -3& 5 \\ 1& 5& -7 \end{vmatrix}$
Next, we calculate the value of the new determinant $\det(\Delta_1)$. We can expand along the first row:
$\det(\Delta_1) = 6 \begin{vmatrix} -3 & 5 \\ 5 & -7 \end{vmatrix} - 0 \begin{vmatrix} 2 & 5 \\ 1 & -7 \end{vmatrix} + 4 \begin{vmatrix} 2 & -3 \\ 1 & 5 \end{vmatrix}$
$\det(\Delta_1) = 6 ((-3)(-7) - (5)(5)) - 0 + 4 ((2)(5) - (-3)(1))$
$\det(\Delta_1) = 6 (21 - 25) + 4 (10 - (-3))$
$\det(\Delta_1) = 6 (-4) + 4 (10 + 3)$
$\det(\Delta_1) = -24 + 4 (13)$
$\det(\Delta_1) = -24 + 52$
$\det(\Delta_1) = 28$
Conclusion:
The value of the original determinant is $\det(\Delta) = -28$.
The value of the determinant after interchanging R1 and R2 is $\det(\Delta_1) = 28$.
We observe that $\det(\Delta_1) = - \det(\Delta)$ ($28 = -(-28)$).
Thus, Property 2 is verified for the given determinant.
Example 8: Evaluate ∆ = $\begin{vmatrix} 3&2&3\\2&2&3\\3&2&3 \end{vmatrix}$
Answer:
Given:
The determinant to be evaluated is ∆ = $\begin{vmatrix} 3&2&3\\2&2&3\\3&2&3 \end{vmatrix}$.
To Find:
The value of the determinant ∆.
Solution (Using Properties of Determinants):
Let the given determinant be ∆.
∆ = $\begin{vmatrix} 3&2&3\\2&2&3\\3&2&3 \end{vmatrix}$
We observe the rows of the determinant: the first row is $R_1 = [3 \quad 2 \quad 3]$ and the third row is $R_3 = [3 \quad 2 \quad 3]$.
According to a fundamental property of determinants, if any two rows (or columns) of a determinant are identical, then the value of the determinant is zero.
Since the first row ($R_1$) and the third row ($R_3$) are identical, the value of the determinant is 0.
Therefore, ∆ = 0.
Alternate Solution (By Expansion):
We can also evaluate the determinant by expanding it along the first row ($R_1$).
∆ = $\begin{vmatrix} 3&2&3\\2&2&3\\3&2&3 \end{vmatrix}$
Expanding along $R_1$:
∆ = $3 \begin{vmatrix} 2 & 3 \\ 2 & 3 \end{vmatrix} - 2 \begin{vmatrix} 2 & 3 \\ 3 & 3 \end{vmatrix} + 3 \begin{vmatrix} 2 & 2 \\ 3 & 2 \end{vmatrix}$
Now, we evaluate the 2x2 determinants:
∆ = $3((2)(3) - (3)(2)) - 2((2)(3) - (3)(3)) + 3((2)(2) - (2)(3))$
∆ = $3(6 - 6) - 2(6 - 9) + 3(4 - 6)$
∆ = $3(0) - 2(-3) + 3(-2)$
∆ = $0 + 6 - 6$
∆ = $0$
Hence, the value of the determinant is 0.
Example 9: Evaluate $\begin{vmatrix} 102&18&36\\1&3&4\\17&3&6 \end{vmatrix}$
Answer:
Given:
The determinant ∆ = $\begin{vmatrix} 102&18&36\\1&3&4\\17&3&6 \end{vmatrix}$.
To Find:
The value of the determinant ∆.
Solution (Using Properties of Determinants):
We observe the first row ($R_1$) and the third row ($R_3$).
$R_1 = [102 \quad 18 \quad 36]$
$R_3 = [17 \quad 3 \quad 6]$
We can see that each element of the first row is 6 times the corresponding element of the third row:
$102 = 6 \times 17$
$18 = 6 \times 3$
$36 = 6 \times 6$
This means $R_1 = 6R_3$.
According to a property of determinants, if any two rows (or columns) of a determinant are proportional, then the value of the determinant is zero.
Since $R_1$ and $R_3$ are proportional, the value of the determinant is 0.
Therefore, ∆ = 0.
Alternate Solution (Using Row Operations):
We can apply the row operation $R_1 \rightarrow R_1 - 6R_3$. The value of the determinant remains unchanged.
The new first row becomes:
$[102 - 6(17) \quad 18 - 6(3) \quad 36 - 6(6)] = [102 - 102 \quad 18 - 18 \quad 36 - 36] = [0 \quad 0 \quad 0]$
So, the determinant becomes:
∆ = $\begin{vmatrix} 0&0&0\\1&3&4\\17&3&6 \end{vmatrix}$
According to another property, if all elements of any row (or column) of a determinant are zero, then the value of the determinant is zero.
Therefore, ∆ = 0.
Example 10: Show that $\begin{vmatrix} a&b&c\\a+2x&b+2y&c+2z\\x&y&z \end{vmatrix} = 0$
Answer:
Given:
The determinant ∆ = $\begin{vmatrix} a&b&c\\a+2x&b+2y&c+2z\\x&y&z \end{vmatrix}$.
To Prove:
∆ = 0.
Proof (Using Row Operations):
Let ∆ = $\begin{vmatrix} a&b&c\\a+2x&b+2y&c+2z\\x&y&z \end{vmatrix}$.
We apply the row operation $R_2 \rightarrow R_2 - R_1$. This does not change the value of the determinant.
The new second row becomes:
$[(a+2x)-a \quad (b+2y)-b \quad (c+2z)-c] = [2x \quad 2y \quad 2z]$
The determinant becomes:
∆ = $\begin{vmatrix} a&b&c\\2x&2y&2z\\x&y&z \end{vmatrix}$
From the second row, we can take out a common factor of 2.
∆ = $2 \begin{vmatrix} a&b&c\\x&y&z\\x&y&z \end{vmatrix}$
Now, we observe that the second row ($R_2$) and the third row ($R_3$) are identical.
According to the property of determinants, if any two rows of a determinant are identical, its value is zero.
Therefore, $\begin{vmatrix} a&b&c\\x&y&z\\x&y&z \end{vmatrix} = 0$.
So, ∆ = $2 \times 0 = 0$.
Hence, proved.
Alternate Proof (By Splitting the Determinant):
Using the property that if a row is a sum of terms, the determinant can be split into a sum of determinants:
∆ = $\begin{vmatrix} a&b&c\\a&b&c\\x&y&z \end{vmatrix} + \begin{vmatrix} a&b&c\\2x&2y&2z\\x&y&z \end{vmatrix}$
In the first determinant, $R_1$ and $R_2$ are identical, so its value is 0.
In the second determinant, $R_2$ is proportional to $R_3$ (since $R_2 = 2R_3$), so its value is also 0.
∆ = $0 + 0 = 0$.
Hence, proved.
Example 11: Prove that $\begin{vmatrix} a&a+b&a+b+c\\2a&3a+2b&4a+3b+2c\\3a&6a+3b&10a+6b+3c \end{vmatrix} = a^3$.
Answer:
Given:
The determinant ∆ = $\begin{vmatrix} a&a+b&a+b+c\\2a&3a+2b&4a+3b+2c\\3a&6a+3b&10a+6b+3c \end{vmatrix}$.
To Prove:
∆ = $a^3$.
Proof:
Let ∆ = $\begin{vmatrix} a&a+b&a+b+c\\2a&3a+2b&4a+3b+2c\\3a&6a+3b&10a+6b+3c \end{vmatrix}$.
We apply row operations to simplify the determinant. Apply $R_2 \rightarrow R_2 - 2R_1$.
∆ = $\begin{vmatrix} a&a+b&a+b+c\\0&(3a+2b)-2(a+b)&(4a+3b+2c)-2(a+b+c)\\3a&6a+3b&10a+6b+3c \end{vmatrix}$
∆ = $\begin{vmatrix} a&a+b&a+b+c\\0&a&2a+b\\3a&6a+3b&10a+6b+3c \end{vmatrix}$
Next, apply $R_3 \rightarrow R_3 - 3R_1$.
∆ = $\begin{vmatrix} a&a+b&a+b+c\\0&a&2a+b\\0&(6a+3b)-3(a+b)&(10a+6b+3c)-3(a+b+c) \end{vmatrix}$
∆ = $\begin{vmatrix} a&a+b&a+b+c\\0&a&2a+b\\0&3a&7a+3b \end{vmatrix}$
Now, expand the determinant along the first column (C1), as it contains two zeros, which simplifies the calculation.
∆ = $a \begin{vmatrix} a & 2a+b \\ 3a & 7a+3b \end{vmatrix} - 0 + 0$
∆ = $a[a(7a+3b) - 3a(2a+b)]$
∆ = $a[7a^2+3ab - (6a^2+3ab)]$
∆ = $a[7a^2+3ab - 6a^2-3ab]$
∆ = $a [a^2]$
∆ = $a^3$
Hence, proved.
Example 12: Without expanding, prove that
∆ = $\begin{vmatrix} x+y&y+z&z+x\\z&x&y\\1&1&1 \end{vmatrix} = 0$ .
Answer:
Given:
The determinant ∆ = $\begin{vmatrix} x+y&y+z&z+x\\z&x&y\\1&1&1 \end{vmatrix}$.
To Prove:
∆ = 0, without expanding.
Proof:
We apply the row operation $R_1 \rightarrow R_1 + R_2$. This operation does not change the value of the determinant.
The new first row becomes:
$[(x+y)+z \quad (y+z)+x \quad (z+x)+y] = [x+y+z \quad x+y+z \quad x+y+z]$
The determinant is now:
∆ = $\begin{vmatrix} x+y+z&x+y+z&x+y+z\\z&x&y\\1&1&1 \end{vmatrix}$
We can take out the common factor $(x+y+z)$ from the first row.
∆ = $(x+y+z) \begin{vmatrix} 1&1&1\\z&x&y\\1&1&1 \end{vmatrix}$
In the resulting determinant, the first row ($R_1$) and the third row ($R_3$) are identical.
By the property of determinants, if any two rows of a determinant are identical, its value is zero.
Therefore, $\begin{vmatrix} 1&1&1\\z&x&y\\1&1&1 \end{vmatrix} = 0$.
So, ∆ = $(x+y+z) \times 0 = 0$.
Hence, proved.
Example 13: Evaluate
∆ = $\begin{vmatrix} 1&a&bc\\1&b&ca\\1&c&ab \end{vmatrix}$
Answer:
Solution:
We need to find the value of the given determinant:
∆ = $\begin{vmatrix} 1&a&bc\\1&b&ca\\1&c&ab \end{vmatrix}$
To simplify the determinant, we apply row operations to create zeros in the first column. This makes expansion simpler.
Apply the operations $R_2 \to R_2 - R_1$ and $R_3 \to R_3 - R_1$.
∆ = $\begin{vmatrix} 1 & a & bc \\ 1-1 & b-a & ca-bc \\ 1-1 & c-a & ab-bc \end{vmatrix}$
∆ = $\begin{vmatrix} 1 & a & bc \\ 0 & b-a & c(a-b) \\ 0 & c-a & b(a-c) \end{vmatrix}$
To make the factors common, we can rewrite the terms in the third column:
∆ = $\begin{vmatrix} 1 & a & bc \\ 0 & (b-a) & -c(b-a) \\ 0 & (c-a) & -b(c-a) \end{vmatrix}$
Now, we take out the common factor $(b-a)$ from the second row ($R_2$) and $(c-a)$ from the third row ($R_3$).
∆ = $(b-a)(c-a) \begin{vmatrix} 1 & a & bc \\ 0 & 1 & -c \\ 0 & 1 & -b \end{vmatrix}$
Expanding the determinant along the first column ($C_1$):
∆ = $(b-a)(c-a) \left[ 1 \begin{vmatrix} 1 & -c \\ 1 & -b \end{vmatrix} - 0 + 0 \right]$
∆ = $(b-a)(c-a) [1 \times ((-b) - (-c))]$
∆ = $(b-a)(c-a) (-b+c)$
∆ = $(b-a)(c-a)(c-b)$
To express the result in a cyclic order $(a-b), (b-c), (c-a)$, we rearrange the terms:
∆ = $[-(a-b)] \times [-(a-c)] \times [-(b-c)]$
∆ = $(-1)(a-b) \times (-1)(a-c) \times (-1)(b-c)$
∆ = $-(a-b)(b-c)(a-c)$
∆ = $(a-b)(b-c)(c-a)$
Therefore, the value of the determinant is $(a-b)(b-c)(c-a)$.
Alternate Solution:
Start again with the original determinant and reduce it all the way to triangular form:
∆ = $\begin{vmatrix} 1&a&bc\\1&b&ca\\1&c&ab \end{vmatrix}$
Apply row operations $R_2 \to R_2 - R_1$ and $R_3 \to R_3 - R_1$.
∆ = $\begin{vmatrix} 1 & a & bc \\ 0 & b-a & ca-bc \\ 0 & c-a & ab-bc \end{vmatrix} = \begin{vmatrix} 1 & a & bc \\ 0 & b-a & -c(b-a) \\ 0 & c-a & -b(c-a) \end{vmatrix}$
Take $(b-a)$ common from $R_2$ and $(c-a)$ from $R_3$.
∆ = $(b-a)(c-a) \begin{vmatrix} 1 & a & bc \\ 0 & 1 & -c \\ 0 & 1 & -b \end{vmatrix}$
Now, apply one more row operation $R_3 \to R_3 - R_2$ to simplify further.
∆ = $(b-a)(c-a) \begin{vmatrix} 1 & a & bc \\ 0 & 1 & -c \\ 0 & 1-1 & -b - (-c) \end{vmatrix}$
∆ = $(b-a)(c-a) \begin{vmatrix} 1 & a & bc \\ 0 & 1 & -c \\ 0 & 0 & c-b \end{vmatrix}$
The determinant of an upper triangular matrix (or lower triangular) is the product of its diagonal elements.
∆ = $(b-a)(c-a)[1 \times 1 \times (c-b)]$
∆ = $(b-a)(c-a)(c-b)$
Rearranging into cyclic order:
∆ = $-(a-b) \times (c-a) \times -(b-c)$
∆ = $(a-b)(b-c)(c-a)$
Example 14: Prove that $\begin{vmatrix} b+c&a&a\\b&c+a&b\\c&c&a+b \end{vmatrix} = 4abc$ .
Answer:
Given:
The determinant ∆ = $\begin{vmatrix} b+c&a&a\\b&c+a&b\\c&c&a+b \end{vmatrix}$.
To Prove:
∆ = $4abc$.
Proof:
We apply the row operation $R_1 \rightarrow R_1 - R_2 - R_3$.
New $R_1$ elements:
Col 1: $(b+c) - b - c = 0$
Col 2: $a - (c+a) - c = a - c - a - c = -2c$
Col 3: $a - b - (a+b) = a - b - a - b = -2b$
The determinant becomes:
∆ = $\begin{vmatrix} 0&-2c&-2b\\b&c+a&b\\c&c&a+b \end{vmatrix}$
Now, expand along the first row ($R_1$).
∆ = $0 \cdot \begin{vmatrix} c+a & b \\ c & a+b \end{vmatrix} - (-2c) \cdot \begin{vmatrix} b & b \\ c & a+b \end{vmatrix} + (-2b) \cdot \begin{vmatrix} b & c+a \\ c & c \end{vmatrix}$
∆ = $2c[b(a+b) - bc] - 2b[bc - c(c+a)]$
∆ = $2c[ab + b^2 - bc] - 2b[bc - c^2 - ac]$
∆ = $(2abc + 2b^2c - 2bc^2) - (2b^2c - 2bc^2 - 2abc)$
∆ = $2abc + 2b^2c - 2bc^2 - 2b^2c + 2bc^2 + 2abc$
Combining like terms:
∆ = $(2abc + 2abc) + (2b^2c - 2b^2c) + (-2bc^2 + 2bc^2)$
∆ = $4abc$
Hence, proved.
Example 15: If x, y, z are different and ∆ = $\begin{vmatrix} x&x^2&1+x^3\\y&y^2&1+y^3\\z&z^2&1+z^3 \end{vmatrix} = 0$ , then show that 1 + xyz = 0
Answer:
Given:
x, y, and z are distinct real numbers (i.e., $x \neq y, y \neq z, z \neq x$).
∆ = $\begin{vmatrix} x&x^2&1+x^3\\y&y^2&1+y^3\\z&z^2&1+z^3 \end{vmatrix} = 0$.
To Prove:
$1 + xyz = 0$.
Proof:
Using the property of determinants, we can split the given determinant into a sum of two determinants:
∆ = $\begin{vmatrix} x&x^2&1\\y&y^2&1\\z&z^2&1 \end{vmatrix} + \begin{vmatrix} x&x^2&x^3\\y&y^2&y^3\\z&z^2&z^3 \end{vmatrix} = 0$
Let the first determinant be $\Delta_1$ and the second be $\Delta_2$.
In $\Delta_2$, we can take out common factors x from $R_1$, y from $R_2$, and z from $R_3$.
$\Delta_2 = xyz \begin{vmatrix} 1&x&x^2\\1&y&y^2\\1&z&z^2 \end{vmatrix}$
Now consider $\Delta_1$. To make it look like the determinant in $\Delta_2$, we perform column interchanges. Interchanging two columns negates the value of the determinant.
$\Delta_1 = \begin{vmatrix} x&x^2&1\\y&y^2&1\\z&z^2&1 \end{vmatrix} \xrightarrow{C_1 \leftrightarrow C_3} -\begin{vmatrix} 1&x^2&x\\1&y^2&y\\1&z^2&z \end{vmatrix} \xrightarrow{C_2 \leftrightarrow C_3} -(-1)\begin{vmatrix} 1&x&x^2\\1&y&y^2\\1&z&z^2 \end{vmatrix}$
So, $\Delta_1 = \begin{vmatrix} 1&x&x^2\\1&y&y^2\\1&z&z^2 \end{vmatrix}$.
The original equation becomes:
$\begin{vmatrix} 1&x&x^2\\1&y&y^2\\1&z&z^2 \end{vmatrix} + xyz \begin{vmatrix} 1&x&x^2\\1&y&y^2\\1&z&z^2 \end{vmatrix} = 0$
Factor out the common determinant:
$(1 + xyz) \begin{vmatrix} 1&x&x^2\\1&y&y^2\\1&z&z^2 \end{vmatrix} = 0$
The determinant $\begin{vmatrix} 1&x&x^2\\1&y&y^2\\1&z&z^2 \end{vmatrix}$ is a standard Vandermonde determinant, and its value is $(x-y)(y-z)(z-x)$.
So, $(1 + xyz)(x-y)(y-z)(z-x) = 0$.
We are given that x, y, and z are different. Therefore:
$(x-y) \neq 0$
$(y-z) \neq 0$
$(z-x) \neq 0$
This implies that their product $(x-y)(y-z)(z-x) \neq 0$.
Since the product of two factors is zero and one of them is non-zero, the other factor must be zero.
Therefore, $1 + xyz = 0$.
Hence, proved.
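This factorization can also be cross-checked symbolically. A brief SymPy sketch (assuming SymPy is available):

```python
import sympy as sp

x, y, z = sp.symbols('x y z')
D = sp.Matrix([[x, x**2, 1 + x**3],
               [y, y**2, 1 + y**3],
               [z, z**2, 1 + z**3]]).det()

# The determinant equals (1 + xyz)(x - y)(y - z)(z - x)
print(sp.expand(D - (1 + x*y*z)*(x - y)*(y - z)*(z - x)) == 0)   # True
```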
Example 16: Show that
$\begin{vmatrix} 1+a&1&1\\1&1+b&1\\1&1&1+c \end{vmatrix}$ = $abc \left(1+\frac{1}{a} +\frac{1}{b}+\frac{1}{c} \right)$ = $abc + bc + ca + ab$
Answer:
Given:
The determinant ∆ = $\begin{vmatrix} 1+a&1&1\\1&1+b&1\\1&1&1+c \end{vmatrix}$.
To Prove:
∆ = $abc \left(1+\frac{1}{a} +\frac{1}{b}+\frac{1}{c} \right)$ = $abc + bc + ca + ab$.
Proof:
Let ∆ = $\begin{vmatrix} 1+a&1&1\\1&1+b&1\\1&1&1+c \end{vmatrix}$.
Assuming $a, b, c \neq 0$, take out common factors a, b, c from rows $R_1, R_2, R_3$ respectively.
∆ = $a \begin{vmatrix} 1+\frac{1}{a}&\frac{1}{a}&\frac{1}{a}\\1&1+b&1\\1&1&1+c \end{vmatrix}$
∆ = $ab \begin{vmatrix} 1+\frac{1}{a}&\frac{1}{a}&\frac{1}{a}\\\frac{1}{b}&1+\frac{1}{b}&\frac{1}{b}\\1&1&1+c \end{vmatrix}$
∆ = $abc \begin{vmatrix} 1+\frac{1}{a}&\frac{1}{a}&\frac{1}{a}\\\frac{1}{b}&1+\frac{1}{b}&\frac{1}{b}\\\frac{1}{c}&\frac{1}{c}&1+\frac{1}{c} \end{vmatrix}$
Now, apply the row operation $R_1 \rightarrow R_1 + R_2 + R_3$.
The new first row becomes:
$[ (1+\frac{1}{a}+\frac{1}{b}+\frac{1}{c}) \quad (\frac{1}{a}+1+\frac{1}{b}+\frac{1}{c}) \quad (\frac{1}{a}+\frac{1}{b}+1+\frac{1}{c}) ]$
Let $S = 1+\frac{1}{a}+\frac{1}{b}+\frac{1}{c}$. The determinant is:
∆ = $abc \begin{vmatrix} S&S&S\\\frac{1}{b}&1+\frac{1}{b}&\frac{1}{b}\\\frac{1}{c}&\frac{1}{c}&1+\frac{1}{c} \end{vmatrix}$
Take the common factor S from the first row.
∆ = $abc \cdot S \begin{vmatrix} 1&1&1\\\frac{1}{b}&1+\frac{1}{b}&\frac{1}{b}\\\frac{1}{c}&\frac{1}{c}&1+\frac{1}{c} \end{vmatrix}$
Apply column operations $C_2 \rightarrow C_2 - C_1$ and $C_3 \rightarrow C_3 - C_1$.
∆ = $abc \cdot S \begin{vmatrix} 1&0&0\\\frac{1}{b}&(1+\frac{1}{b})-\frac{1}{b}&\frac{1}{b}-\frac{1}{b}\\\frac{1}{c}&\frac{1}{c}-\frac{1}{c}&(1+\frac{1}{c})-\frac{1}{c} \end{vmatrix}$
∆ = $abc \cdot S \begin{vmatrix} 1&0&0\\\frac{1}{b}&1&0\\\frac{1}{c}&0&1 \end{vmatrix}$
The determinant of a lower triangular matrix is the product of its diagonal elements.
So, $\begin{vmatrix} 1&0&0\\\frac{1}{b}&1&0\\\frac{1}{c}&0&1 \end{vmatrix} = 1 \times 1 \times 1 = 1$.
Therefore, ∆ = $abc \cdot S \cdot 1 = abc \cdot S$.
Substituting the value of S back:
∆ = $abc \left(1+\frac{1}{a}+\frac{1}{b}+\frac{1}{c}\right)$.
This proves the first part of the equality.
Now, distribute $abc$ into the parenthesis to prove the second part:
∆ = $abc(1) + abc(\frac{1}{a}) + abc(\frac{1}{b}) + abc(\frac{1}{c})$
∆ = $abc + bc + ac + ab$
So, ∆ = $abc + bc + ca + ab$.
Hence, proved.
Exercise 4.2
Using the property of determinants and without expanding in Exercises 1 to 7, prove that:
Question 1. $\begin{vmatrix} x&a&x+a\\y&b&y+b\\z&c&z+c \end{vmatrix} = 0$
Answer:
To Prove:
$\begin{vmatrix} x&a&x+a\\y&b&y+b\\z&c&z+c \end{vmatrix} = 0$
Proof:
Let the determinant be ∆.
∆ = $\begin{vmatrix} x&a&x+a\\y&b&y+b\\z&c&z+c \end{vmatrix}$
Using the property that a determinant can be split if a column (or row) is a sum of terms:
∆ = $\begin{vmatrix} x&a&x\\y&b&y\\z&c&z \end{vmatrix} + \begin{vmatrix} x&a&a\\y&b&b\\z&c&c \end{vmatrix}$
In the first determinant, the first column (C1) and the third column (C3) are identical. According to the properties of determinants, if two columns are identical, the value of the determinant is zero.
So, $\begin{vmatrix} x&a&x\\y&b&y\\z&c&z \end{vmatrix} = 0$.
In the second determinant, the second column (C2) and the third column (C3) are identical. Therefore, its value is also zero.
So, $\begin{vmatrix} x&a&a\\y&b&b\\z&c&c \end{vmatrix} = 0$.
Thus, ∆ = $0 + 0 = 0$.
Hence, proved.
Alternate Proof (Using Column Operations):
Apply the column operation $C_3 \rightarrow C_3 - C_1 - C_2$. The value of the determinant remains unchanged.
∆ = $\begin{vmatrix} x&a&(x+a)-x-a\\y&b&(y+b)-y-b\\z&c&(z+c)-z-c \end{vmatrix}$
∆ = $\begin{vmatrix} x&a&0\\y&b&0\\z&c&0 \end{vmatrix}$
Since all elements of the third column (C3) are zero, the value of the determinant is zero.
Hence, proved.
Question 2. $\begin{vmatrix} a−b&b−c&c−a\\b−c&c−a&a−b\\c−a&a−b&b−c \end{vmatrix} = 0$
Answer:
To Prove:
$\begin{vmatrix} a−b&b−c&c−a\\b−c&c−a&a−b\\c−a&a−b&b−c \end{vmatrix} = 0$
Proof:
Let the determinant be ∆.
∆ = $\begin{vmatrix} a−b&b−c&c−a\\b−c&c−a&a−b\\c−a&a−b&b−c \end{vmatrix}$
Apply the column operation $C_1 \rightarrow C_1 + C_2 + C_3$. The value of the determinant remains unchanged.
The new first column elements are:
$(a-b) + (b-c) + (c-a) = a-a+b-b+c-c = 0$
$(b-c) + (c-a) + (a-b) = a-a+b-b+c-c = 0$
$(c-a) + (a-b) + (b-c) = a-a+b-b+c-c = 0$
The determinant becomes:
∆ = $\begin{vmatrix} 0&b−c&c−a\\0&c−a&a−b\\0&a−b&b−c \end{vmatrix}$
Since all elements of the first column (C1) are zero, the value of the determinant is zero.
Hence, proved.
Question 3. $\begin{vmatrix} 2&7&65\\3&8&75\\5&9&86 \end{vmatrix} = 0$
Answer:
To Prove:
$\begin{vmatrix} 2&7&65\\3&8&75\\5&9&86 \end{vmatrix} = 0$
Proof:
Let the determinant be ∆.
∆ = $\begin{vmatrix} 2&7&65\\3&8&75\\5&9&86 \end{vmatrix}$
Apply the column operation $C_3 \rightarrow C_3 - 9C_2$.
∆ = $\begin{vmatrix} 2&7&65-9(7)\\3&8&75-9(8)\\5&9&86-9(9) \end{vmatrix}$
∆ = $\begin{vmatrix} 2&7&65-63\\3&8&75-72\\5&9&86-81 \end{vmatrix}$
∆ = $\begin{vmatrix} 2&7&2\\3&8&3\\5&9&5 \end{vmatrix}$
In this determinant, the first column (C1) and the third column (C3) are identical.
According to the properties of determinants, if any two columns are identical, the value of the determinant is zero.
Therefore, ∆ = 0.
Hence, proved.
Question 4. $\begin{vmatrix} 1&bc&a(b+c)\\1&ca&b(c+a)\\1&ab&c(a+b) \end{vmatrix} = 0$
Answer:
To Prove:
$\begin{vmatrix} 1&bc&a(b+c)\\1&ca&b(c+a)\\1&ab&c(a+b) \end{vmatrix} = 0$
Proof:
Let the determinant be ∆.
∆ = $\begin{vmatrix} 1&bc&ab+ac\\1&ca&bc+ab\\1&ab&ca+bc \end{vmatrix}$
Apply the column operation $C_3 \rightarrow C_3 + C_2$.
∆ = $\begin{vmatrix} 1&bc&ab+ac+bc\\1&ca&bc+ab+ca\\1&ab&ca+bc+ab \end{vmatrix}$
Take out the common factor $(ab+bc+ca)$ from the third column (C3).
∆ = $(ab+bc+ca) \begin{vmatrix} 1&bc&1\\1&ca&1\\1&ab&1 \end{vmatrix}$
In the resulting determinant, the first column (C1) and the third column (C3) are identical.
Therefore, the value of this determinant is zero.
∆ = $(ab+bc+ca) \times 0 = 0$.
Hence, proved.
Question 5. $\begin{vmatrix} b+c&q+r&y+z\\c+a&r+p&z+x\\a+b&p+q&x+y \end{vmatrix} = 2 \begin{vmatrix} a&p&x\\b&q&y\\c&r&z \end{vmatrix}$
Answer:
To Prove:
$\begin{vmatrix} b+c&q+r&y+z\\c+a&r+p&z+x\\a+b&p+q&x+y \end{vmatrix} = 2 \begin{vmatrix} a&p&x\\b&q&y\\c&r&z \end{vmatrix}$
Proof:
Let LHS = $\begin{vmatrix} b+c&q+r&y+z\\c+a&r+p&z+x\\a+b&p+q&x+y \end{vmatrix}$
Apply the row operation $R_1 \rightarrow R_1 + R_2 + R_3$.
LHS = $\begin{vmatrix} 2(a+b+c)&2(p+q+r)&2(x+y+z)\\c+a&r+p&z+x\\a+b&p+q&x+y \end{vmatrix}$
Take the common factor 2 from the first row.
LHS = $2 \begin{vmatrix} a+b+c&p+q+r&x+y+z\\c+a&r+p&z+x\\a+b&p+q&x+y \end{vmatrix}$
Apply $R_2 \rightarrow R_2 - R_1$ and $R_3 \rightarrow R_3 - R_1$.
LHS = $2 \begin{vmatrix} a+b+c&p+q+r&x+y+z\\-b&-q&-y\\-c&-r&-z \end{vmatrix}$
Apply $R_1 \rightarrow R_1 + R_2 + R_3$.
LHS = $2 \begin{vmatrix} a&p&x\\-b&-q&-y\\-c&-r&-z \end{vmatrix}$
Take the common factor -1 from $R_2$ and -1 from $R_3$.
LHS = $2(-1)(-1) \begin{vmatrix} a&p&x\\b&q&y\\c&r&z \end{vmatrix}$
LHS = $2 \begin{vmatrix} a&p&x\\b&q&y\\c&r&z \end{vmatrix}$ = RHS
Hence, proved.
Question 6. $\begin{vmatrix} 0&a&−b\\−a&0&−c\\b&c&0 \end{vmatrix} = 0$ .
Answer:
To Prove:
$\begin{vmatrix} 0&a&−b\\−a&0&−c\\b&c&0 \end{vmatrix} = 0$
Proof:
Let ∆ = $\begin{vmatrix} 0&a&−b\\−a&0&−c\\b&c&0 \end{vmatrix}$.
The given matrix is a skew-symmetric matrix of order 3. A matrix A is skew-symmetric if $A' = -A$.
Let $A = \begin{bmatrix} 0&a&−b\\−a&0&−c\\b&c&0 \end{bmatrix}$.
Then $A' = \begin{bmatrix} 0&-a&b\\a&0&c\\-b&-c&0 \end{bmatrix} = -\begin{bmatrix} 0&a&-b\\-a&0&-c\\b&c&0 \end{bmatrix} = -A$.
We know that for any square matrix A, $\det(A') = \det(A)$.
Also, for a matrix of order n, $\det(kA) = k^n \det(A)$.
So, $\det(A') = \det(-A) = (-1)^3 \det(A) = -\det(A)$.
Thus, we have $\det(A) = -\det(A)$.
$2 \det(A) = 0$
$\det(A) = 0$.
Therefore, ∆ = 0.
Hence, proved.
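The skew-symmetry argument can be confirmed symbolically as well (a brief SymPy sketch, assuming SymPy is available):

```python
import sympy as sp

a, b, c = sp.symbols('a b c')
A = sp.Matrix([[0, a, -b],
               [-a, 0, -c],
               [b, c, 0]])
print(A.T == -A)               # True: A is skew-symmetric
print(sp.simplify(A.det()))    # 0, as expected for odd order
```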
Question 7. $\begin{vmatrix} −a^2&ab&ac\\ba&−b^2&bc\\ca&cb&−c^2 \end{vmatrix} = 4a^2b^2c^2$ .
Answer:
To Prove:
$\begin{vmatrix} −a^2&ab&ac\\ba&−b^2&bc\\ca&cb&−c^2 \end{vmatrix} = 4a^2b^2c^2$
Proof:
Let LHS = $\begin{vmatrix} −a^2&ab&ac\\ba&−b^2&bc\\ca&cb&−c^2 \end{vmatrix}$.
Take common factors a from $R_1$, b from $R_2$, and c from $R_3$.
LHS = $abc \begin{vmatrix} -a&b&c\\a&-b&c\\a&b&-c \end{vmatrix}$
Again, take common factors a from $C_1$, b from $C_2$, and c from $C_3$.
LHS = $abc(abc) \begin{vmatrix} -1&1&1\\1&-1&1\\1&1&-1 \end{vmatrix}$
LHS = $a^2b^2c^2 \begin{vmatrix} -1&1&1\\1&-1&1\\1&1&-1 \end{vmatrix}$
Apply row operations $R_2 \rightarrow R_2 + R_1$ and $R_3 \rightarrow R_3 + R_1$.
LHS = $a^2b^2c^2 \begin{vmatrix} -1&1&1\\0&0&2\\0&2&0 \end{vmatrix}$
Now, expand along the first column (C1).
LHS = $a^2b^2c^2 \left[ -1 \begin{vmatrix} 0 & 2 \\ 2 & 0 \end{vmatrix} - 0 + 0 \right]$
LHS = $a^2b^2c^2[-1(0 \times 0 - 2 \times 2)]$
LHS = $a^2b^2c^2[-1(-4)]$
LHS = $a^2b^2c^2 (4) = 4a^2b^2c^2$ = RHS
Hence, proved.
By using properties of determinants, in Exercises 8 to 14, show that:
Question 8.
(i) $\begin{vmatrix} 1&a&a^2\\1&b&b^2\\1&c&c^2 \end{vmatrix} = (a - b) (b - c) (c - a)$
(ii) $\begin{vmatrix} 1&1&1\\a&b&c\\a^3&b^3&c^3 \end{vmatrix} = (a - b) (b - c) (c - a) (a + b + c)$
Answer:
Part (i)
To Prove: $\begin{vmatrix} 1&a&a^2\\1&b&b^2\\1&c&c^2 \end{vmatrix} = (a - b) (b - c) (c - a)$.
Proof:
Let LHS = $\begin{vmatrix} 1&a&a^2\\1&b&b^2\\1&c&c^2 \end{vmatrix}$.
Apply the row operations $R_2 \rightarrow R_2 - R_1$ and $R_3 \rightarrow R_3 - R_1$.
LHS = $\begin{vmatrix} 1&a&a^2\\0&b-a&b^2-a^2\\0&c-a&c^2-a^2 \end{vmatrix}$
LHS = $\begin{vmatrix} 1&a&a^2\\0&b-a&(b-a)(b+a)\\0&c-a&(c-a)(c+a) \end{vmatrix}$
Take out common factors $(b-a)$ from $R_2$ and $(c-a)$ from $R_3$.
LHS = $(b-a)(c-a) \begin{vmatrix} 1&a&a^2\\0&1&b+a\\0&1&c+a \end{vmatrix}$
Expand the determinant along the first column (C1).
LHS = $(b-a)(c-a) [1 \cdot \begin{vmatrix} 1 & b+a \\ 1 & c+a \end{vmatrix} - 0 + 0]$
LHS = $(b-a)(c-a)[1(c+a) - 1(b+a)]$
LHS = $(b-a)(c-a)(c-b)$
Rearranging the terms:
LHS = $-(a-b)(c-a)(-(b-c))$
LHS = $(a-b)(b-c)(c-a)$ = RHS.
Hence, proved.
Part (ii)
To Prove: $\begin{vmatrix} 1&1&1\\a&b&c\\a^3&b^3&c^3 \end{vmatrix} = (a - b) (b - c) (c - a) (a + b + c)$.
Proof:
Let LHS = $\begin{vmatrix} 1&1&1\\a&b&c\\a^3&b^3&c^3 \end{vmatrix}$.
Apply the column operations $C_2 \rightarrow C_2 - C_1$ and $C_3 \rightarrow C_3 - C_1$.
LHS = $\begin{vmatrix} 1&0&0\\a&b-a&c-a\\a^3&b^3-a^3&c^3-a^3 \end{vmatrix}$
LHS = $\begin{vmatrix} 1&0&0\\a&b-a&c-a\\a^3&(b-a)(b^2+ab+a^2)&(c-a)(c^2+ac+a^2) \end{vmatrix}$
Take out common factors $(b-a)$ from $C_2$ and $(c-a)$ from $C_3$.
LHS = $(b-a)(c-a) \begin{vmatrix} 1&0&0\\a&1&1\\a^3&b^2+ab+a^2&c^2+ac+a^2 \end{vmatrix}$
Expand the determinant along the first row (R1).
LHS = $(b-a)(c-a) [1 \cdot \begin{vmatrix} 1 & 1 \\ b^2+ab+a^2 & c^2+ac+a^2 \end{vmatrix} - 0 + 0]$
LHS = $(b-a)(c-a)[(c^2+ac+a^2) - (b^2+ab+a^2)]$
LHS = $(b-a)(c-a)[c^2-b^2 + ac-ab]$
LHS = $(b-a)(c-a)[(c-b)(c+b) + a(c-b)]$
LHS = $(b-a)(c-a)(c-b)(c+b+a)$
Rearranging the terms:
LHS = $-(a-b)(c-a)(-(b-c))(a+b+c)$
LHS = $(a-b)(b-c)(c-a)(a+b+c)$ = RHS.
Hence, proved.
Question 9. $\begin{vmatrix} x&x^2&yz\\y&y^2&zx\\z&z^2&xy \end{vmatrix} = (x - y)(y - z)(z - x)(xy + yz + zx)$
Answer:
Solution:
Let the given determinant be $\Delta$.
$\Delta = \begin{vmatrix} x&x^2&yz\\y&y^2&zx\\z&z^2&xy \end{vmatrix}$
Apply the row transformations $R_1 \to R_1 - R_2$ and $R_2 \to R_2 - R_3$.
$\Delta = \begin{vmatrix} x-y & x^2-y^2 & yz-zx \\ y-z & y^2-z^2 & zx-xy \\ z & z^2 & xy \end{vmatrix}$
Factor out $(x-y)$ from $R_1$ and $(y-z)$ from $R_2$. Note that:
$x^2-y^2 = (x-y)(x+y)$
$yz-zx = z(y-x) = -z(x-y)$
$y^2-z^2 = (y-z)(y+z)$
$zx-xy = x(z-y) = -x(y-z)$
So, taking common factors from $R_1$ and $R_2$, we get:
$\Delta = (x-y)(y-z) \begin{vmatrix} 1 & x+y & -z \\ 1 & y+z & -x \\ z & z^2 & xy \end{vmatrix}$
Now, apply the row transformation $R_1 \to R_1 - R_2$ on the determinant.
$\Delta = (x-y)(y-z) \begin{vmatrix} 1-1 & (x+y)-(y+z) & -z-(-x) \\ 1 & y+z & -x \\ z & z^2 & xy \end{vmatrix}$
$\Delta = (x-y)(y-z) \begin{vmatrix} 0 & x-z & x-z \\ 1 & y+z & -x \\ z & z^2 & xy \end{vmatrix}$
Factor out $(x-z)$ from $R_1$.
$\Delta = (x-y)(y-z)(x-z) \begin{vmatrix} 0 & 1 & 1 \\ 1 & y+z & -x \\ z & z^2 & xy \end{vmatrix}$
Expand the remaining determinant along the first row ($R_1$).
Expanding along $R_1$, we have:
$\begin{vmatrix} 0 & 1 & 1 \\ 1 & y+z & -x \\ z & z^2 & xy \end{vmatrix} = 0 \cdot \begin{vmatrix} y+z & -x \\ z^2 & xy \end{vmatrix} - 1 \cdot \begin{vmatrix} 1 & -x \\ z & xy \end{vmatrix} + 1 \cdot \begin{vmatrix} 1 & y+z \\ z & z^2 \end{vmatrix}$
$= 0 - (1 \cdot xy - (-x) \cdot z) + (1 \cdot z^2 - (y+z) \cdot z)$
$= -(xy + xz) + (z^2 - (yz + z^2))$
$= -xy - xz + z^2 - yz - z^2$
$= -xy - xz - yz$
$= -(xy + yz + zx)$
Substitute this back into the expression for $\Delta$:
$\Delta = (x-y)(y-z)(x-z) \cdot (-(xy + yz + zx))$
We can rewrite $(x-z)$ as $-(z-x)$. So,
$\Delta = (x-y)(y-z) \cdot (-(z-x)) \cdot (-(xy + yz + zx))$
$\Delta = (x-y)(y-z)(z-x)(xy + yz + zx)$
Thus, we have shown that $\begin{vmatrix} x&x^2&yz\\y&y^2&zx\\z&z^2&xy \end{vmatrix} = (x - y)(y - z)(z - x)(xy + yz + zx)$.
Hence Proved.
Question 10.
(i) $\begin{vmatrix} x+4&2x&2x\\2x&x+4&2x\\2x&2x&x+4 \end{vmatrix} = (5x + 4) (4 - x)^2$
(ii) $\begin{vmatrix} y+k&y&y\\y&y+k&y\\y&y&y+k \end{vmatrix} = k^2 (3y + k)$
Answer:
Solution (i):
Let $\Delta_1 = \begin{vmatrix} x+4&2x&2x\\2x&x+4&2x\\2x&2x&x+4 \end{vmatrix}$.
We need to prove that $\Delta_1 = (5x + 4) (4 - x)^2$.
Apply the operation $C_1 \to C_1 + C_2 + C_3$ to the determinant:
$\Delta_1 = \begin{vmatrix} x+4+2x+2x&2x&2x\\2x+x+4+2x&x+4&2x\\2x+2x+x+4&2x&x+4 \end{vmatrix}$
$\Delta_1 = \begin{vmatrix} 5x+4&2x&2x\\5x+4&x+4&2x\\5x+4&2x&x+4 \end{vmatrix}$
Take out the common factor $(5x+4)$ from $C_1$:
$\Delta_1 = (5x+4) \begin{vmatrix} 1&2x&2x\\1&x+4&2x\\1&2x&x+4 \end{vmatrix}$
Apply row operations $R_2 \to R_2 - R_1$ and $R_3 \to R_3 - R_1$ to create zeros in the first column:
$\Delta_1 = (5x+4) \begin{vmatrix} 1 & 2x & 2x \\ 1-1 & (x+4)-2x & 2x-2x \\ 1-1 & 2x-2x & (x+4)-2x \end{vmatrix}$
$\Delta_1 = (5x+4) \begin{vmatrix} 1 & 2x & 2x \\ 0 & 4-x & 0 \\ 0 & 0 & 4-x \end{vmatrix}$
Expand the determinant along the first column ($C_1$). Only the first element contributes since the others are zero:
$\Delta_1 = (5x+4) \cdot 1 \cdot \begin{vmatrix} 4-x & 0 \\ 0 & 4-x \end{vmatrix}$
Evaluate the $2 \times 2$ determinant:
$\begin{vmatrix} 4-x & 0 \\ 0 & 4-x \end{vmatrix} = (4-x)(4-x) - 0 \cdot 0 = (4-x)^2$
Substitute this back into the expression for $\Delta_1$:
$\Delta_1 = (5x+4)(4-x)^2$
This matches the right-hand side of the equation we needed to prove.
Hence Proved (i).
Solution (ii):
Let $\Delta_2 = \begin{vmatrix} y+k&y&y\\y&y+k&y\\y&y&y+k \end{vmatrix}$.
We need to prove that $\Delta_2 = k^2 (3y + k)$.
Apply the operation $C_1 \to C_1 + C_2 + C_3$ to the determinant:
$\Delta_2 = \begin{vmatrix} y+k+y+y&y&y\\y+y+k+y&y+k&y\\y+y+y+k&y&y+k \end{vmatrix}$
$\Delta_2 = \begin{vmatrix} 3y+k&y&y\\3y+k&y+k&y\\3y+k&y&y+k \end{vmatrix}$
Take out the common factor $(3y+k)$ from $C_1$:
$\Delta_2 = (3y+k) \begin{vmatrix} 1&y&y\\1&y+k&y\\1&y&y+k \end{vmatrix}$
Apply row operations $R_2 \to R_2 - R_1$ and $R_3 \to R_3 - R_1$ to create zeros in the first column:
$\Delta_2 = (3y+k) \begin{vmatrix} 1 & y & y \\ 1-1 & (y+k)-y & y-y \\ 1-1 & y-y & (y+k)-y \end{vmatrix}$
$\Delta_2 = (3y+k) \begin{vmatrix} 1 & y & y \\ 0 & k & 0 \\ 0 & 0 & k \end{vmatrix}$
Expand the determinant along the first column ($C_1$). Only the first element contributes since the others are zero:
$\Delta_2 = (3y+k) \cdot 1 \cdot \begin{vmatrix} k & 0 \\ 0 & k \end{vmatrix}$
Evaluate the $2 \times 2$ determinant:
$\begin{vmatrix} k & 0 \\ 0 & k \end{vmatrix} = k \cdot k - 0 \cdot 0 = k^2$
Substitute this back into the expression for $\Delta_2$:
$\Delta_2 = (3y+k) k^2 = k^2 (3y+k)$
This matches the right-hand side of the equation we needed to prove.
Hence Proved (ii).
Question 11.
(i) $\begin{vmatrix} a−b−c&2a&2a\\2b&b−c−a&2b\\2c&2c&c−a−b \end{vmatrix} = (a + b + c)^3$
(ii) $\begin{vmatrix} x+y+2z&x&y\\z&y+z+2x&y\\z&x&z+x+2y \end{vmatrix} = 2(x + y + z)^3$
Answer:
Part (i)
To Prove: $\begin{vmatrix} a−b−c&2a&2a\\2b&b−c−a&2b\\2c&2c&c−a−b \end{vmatrix} = (a + b + c)^3$.
Proof:
Let LHS = $\begin{vmatrix} a−b−c&2a&2a\\2b&b−c−a&2b\\2c&2c&c−a−b \end{vmatrix}$.
Apply $R_1 \rightarrow R_1 + R_2 + R_3$.
LHS = $\begin{vmatrix} a+b+c&a+b+c&a+b+c\\2b&b−c−a&2b\\2c&2c&c−a−b \end{vmatrix}$
Take common factor $(a+b+c)$ from $R_1$.
LHS = $(a+b+c) \begin{vmatrix} 1&1&1\\2b&b−c−a&2b\\2c&2c&c−a−b \end{vmatrix}$
Apply $C_2 \rightarrow C_2 - C_1$ and $C_3 \rightarrow C_3 - C_1$.
LHS = $(a+b+c) \begin{vmatrix} 1&0&0\\2b&-(a+b+c)&0\\2c&0&-(a+b+c) \end{vmatrix}$
Expand along $R_1$.
LHS = $(a+b+c)[1 \cdot ((-(a+b+c))(-(a+b+c)) - 0)] = (a+b+c)(a+b+c)^2 = (a+b+c)^3$ = RHS.
Hence, proved.
Part (ii)
To Prove: $\begin{vmatrix} x+y+2z&x&y\\z&y+z+2x&y\\z&x&z+x+2y \end{vmatrix} = 2(x + y + z)^3$.
Proof:
Let LHS = $\begin{vmatrix} x+y+2z&x&y\\z&y+z+2x&y\\z&x&z+x+2y \end{vmatrix}$.
Apply $C_1 \rightarrow C_1 + C_2 + C_3$.
LHS = $\begin{vmatrix} 2(x+y+z)&x&y\\2(x+y+z)&y+z+2x&y\\2(x+y+z)&x&z+x+2y \end{vmatrix}$
Take common factor $2(x+y+z)$ from $C_1$.
LHS = $2(x+y+z) \begin{vmatrix} 1&x&y\\1&y+z+2x&y\\1&x&z+x+2y \end{vmatrix}$
Apply $R_2 \rightarrow R_2 - R_1$ and $R_3 \rightarrow R_3 - R_1$.
LHS = $2(x+y+z) \begin{vmatrix} 1&x&y\\0&x+y+z&0\\0&0&x+y+z \end{vmatrix}$
Expand along $C_1$.
LHS = $2(x+y+z)[1 \cdot ((x+y+z)(x+y+z) - 0)] = 2(x+y+z)(x+y+z)^2 = 2(x+y+z)^3$ = RHS.
Hence, proved.
Question 12. $\begin{vmatrix} 1&x&x^2\\x^2&1&x\\x&x^2&1 \end{vmatrix} = (1 - x^3)^2$
Answer:
Solution:
Let $\Delta = \begin{vmatrix} 1&x&x^2\\x^2&1&x\\x&x^2&1 \end{vmatrix}$.
We need to prove that $\Delta = (1 - x^3)^2$.
Apply the column operation $C_1 \to C_1 + C_2 + C_3$:
$\Delta = \begin{vmatrix} 1+x+x^2 & x & x^2 \\ x^2+1+x & 1 & x \\ x+x^2+1 & x^2 & 1 \end{vmatrix}$
$\Delta = \begin{vmatrix} 1+x+x^2 & x & x^2 \\ 1+x+x^2 & 1 & x \\ 1+x+x^2 & x^2 & 1 \end{vmatrix}$
Take out the common factor $(1+x+x^2)$ from $C_1$:
$\Delta = (1+x+x^2) \begin{vmatrix} 1&x&x^2\\1&1&x\\1&x^2&1 \end{vmatrix}$
Apply row operations $R_2 \to R_2 - R_1$ and $R_3 \to R_3 - R_1$ to create zeros in the first column:
$\Delta = (1+x+x^2) \begin{vmatrix} 1 & x & x^2 \\ 1-1 & 1-x & x-x^2 \\ 1-1 & x^2-x & 1-x^2 \end{vmatrix}$
$\Delta = (1+x+x^2) \begin{vmatrix} 1 & x & x^2 \\ 0 & 1-x & x(1-x) \\ 0 & x(x-1) & (1-x)(1+x) \end{vmatrix}$
$\Delta = (1+x+x^2) \begin{vmatrix} 1 & x & x^2 \\ 0 & 1-x & x(1-x) \\ 0 & -x(1-x) & (1-x)(1+x) \end{vmatrix}$
Expand the determinant along the first column ($C_1$). Only the element in the first row contributes:
$\Delta = (1+x+x^2) \cdot 1 \cdot \begin{vmatrix} 1-x & x(1-x) \\ -x(1-x) & (1-x)(1+x) \end{vmatrix}$
Evaluate the $2 \times 2$ determinant:
$\begin{vmatrix} 1-x & x(1-x) \\ -x(1-x) & (1-x)(1+x) \end{vmatrix} = (1-x) \cdot (1-x)(1+x) - (x(1-x)) \cdot (-x(1-x))$
$= (1-x)^2(1+x) + x^2(1-x)^2$
$= (1-x)^2 [(1+x) + x^2]$
$= (1-x)^2 (1+x+x^2)$
Substitute this result back into the expression for $\Delta$:
$\Delta = (1+x+x^2) \cdot (1-x)^2 (1+x+x^2)$
$\Delta = (1+x+x^2)^2 (1-x)^2$
$\Delta = [(1+x+x^2)(1-x)]^2$
Recall the factorization for the difference of cubes: $1 - x^3 = (1-x)(1+x+x^2)$.
Using this identity:
$\Delta = [1-x^3]^2$
$\Delta = (1-x^3)^2$
This matches the right-hand side of the equation.
Hence Proved.
Question 13. $\begin{vmatrix} 1+a^2−b^2&2ab&−2b\\2ab&1−a^2+b^2&2a\\2b&−2a&1−a^2−b^2 \end{vmatrix} = (1 + a^2 + b^2)^3$
Answer:
Solution:
Let the given determinant be $D$.
$D = \begin{vmatrix} 1+a^2−b^2&2ab&−2b\\2ab&1−a^2+b^2&2a\\2b&−2a&1−a^2−b^2 \end{vmatrix}$
Apply row operations $R_1 \to R_1 + b R_3$ and $R_2 \to R_2 - a R_3$.
$D = \begin{vmatrix} (1+a^2-b^2) + b(2b) & 2ab + b(-2a) & -2b + b(1-a^2-b^2) \\ 2ab - a(2b) & (1-a^2+b^2) - a(-2a) & 2a - a(1-a^2-b^2) \\ 2b & -2a & 1-a^2-b^2 \end{vmatrix}$
Simplifying the elements:
Row 1:
$(1+a^2-b^2) + 2b^2 = 1+a^2+b^2$
$2ab - 2ab = 0$
$-2b + b - a^2b - b^3 = -b - a^2b - b^3 = -b(1+a^2+b^2)$
Row 2:
$2ab - 2ab = 0$
$(1-a^2+b^2) + 2a^2 = 1+a^2+b^2$
$2a - a + a^3 + ab^2 = a + a^3 + ab^2 = a(1+a^2+b^2)$
The determinant becomes:
$D = \begin{vmatrix} 1+a^2+b^2 & 0 & -b(1+a^2+b^2) \\ 0 & 1+a^2+b^2 & a(1+a^2+b^2) \\ 2b & -2a & 1-a^2-b^2 \end{vmatrix}$
Take $(1+a^2+b^2)$ common from $R_1$ and $R_2$.
$D = (1+a^2+b^2)(1+a^2+b^2) \begin{vmatrix} 1 & 0 & -b \\ 0 & 1 & a \\ 2b & -2a & 1-a^2-b^2 \end{vmatrix}$
$D = (1+a^2+b^2)^2 \begin{vmatrix} 1 & 0 & -b \\ 0 & 1 & a \\ 2b & -2a & 1-a^2-b^2 \end{vmatrix}$
Expand the determinant along the first row ($R_1$).
$D = (1+a^2+b^2)^2 \left[ 1 \cdot \begin{vmatrix} 1 & a \\ -2a & 1-a^2-b^2 \end{vmatrix} - 0 \cdot \begin{vmatrix} 0 & a \\ 2b & 1-a^2-b^2 \end{vmatrix} + (-b) \cdot \begin{vmatrix} 0 & 1 \\ 2b & -2a \end{vmatrix} \right]$
Evaluate the 2x2 determinants:
$\begin{vmatrix} 1 & a \\ -2a & 1-a^2-b^2 \end{vmatrix} = 1(1-a^2-b^2) - a(-2a) = 1-a^2-b^2 + 2a^2 = 1+a^2-b^2$
$\begin{vmatrix} 0 & 1 \\ 2b & -2a \end{vmatrix} = 0(-2a) - 1(2b) = 0 - 2b = -2b$
Substitute these values back into the expansion:
$D = (1+a^2+b^2)^2 \left[ 1 \cdot (1+a^2-b^2) - b \cdot (-2b) \right]$
$D = (1+a^2+b^2)^2 \left[ 1+a^2-b^2 + 2b^2 \right]$
$D = (1+a^2+b^2)^2 \left[ 1+a^2+b^2 \right]$
$D = (1+a^2+b^2)^{2+1}$
$D = (1+a^2+b^2)^3$
Thus, $\begin{vmatrix} 1+a^2−b^2&2ab&−2b\\2ab&1−a^2+b^2&2a\\2b&−2a&1−a^2−b^2 \end{vmatrix} = (1 + a^2 +b^2)^3$.
Hence, Proved.
Question 14. $\begin{vmatrix} a^2+1&ab&ac\\ab&b^2+1&bc\\ca&cb&c^2+1 \end{vmatrix} = 1 + a^2 + b^2 + c^2 $
Answer:
Given:
The determinant $D = \begin{vmatrix} a^2+1&ab&ac\\ab&b^2+1&bc\\ca&cb&c^2+1 \end{vmatrix}$
To Prove:
$D = 1 + a^2 + b^2 + c^2$
Solution:
Let the given determinant be $D$.
$D = \begin{vmatrix} a^2+1&ab&ac\\ab&b^2+1&bc\\ca&cb&c^2+1 \end{vmatrix}$
Multiply $C_1$ by $a$, $C_2$ by $b$, and $C_3$ by $c$. To balance this, divide the determinant by $abc$.
$D = \frac{1}{abc} \begin{vmatrix} a(a^2+1)&b(ab)&c(ac)\\a(ab)&b(b^2+1)&c(bc)\\a(ca)&b(cb)&c(c^2+1) \end{vmatrix}$
$D = \frac{1}{abc} \begin{vmatrix} a^3+a&ab^2&ac^2\\a^2b&b^3+b&bc^2\\a^2c&b^2c&c^3+c \end{vmatrix}$
Now, take $a$ common from $R_1$, $b$ common from $R_2$, and $c$ common from $R_3$.
$D = \frac{abc}{abc} \begin{vmatrix} a^2+1&b^2&c^2\\a^2&b^2+1&c^2\\a^2&b^2&c^2+1 \end{vmatrix}$
$D = \begin{vmatrix} a^2+1&b^2&c^2\\a^2&b^2+1&c^2\\a^2&b^2&c^2+1 \end{vmatrix}$
Apply row operations $R_1 \to R_1 - R_2$ and $R_2 \to R_2 - R_3$.
$D = \begin{vmatrix} (a^2+1) - a^2 & b^2 - (b^2+1) & c^2 - c^2 \\ a^2 - a^2 & (b^2+1) - b^2 & c^2 - (c^2+1) \\ a^2 & b^2 & c^2+1 \end{vmatrix}$
Simplify the elements:
$D = \begin{vmatrix} 1 & -1 & 0 \\ 0 & 1 & -1 \\ a^2 & b^2 & c^2+1 \end{vmatrix}$
Expand the determinant along the first row ($R_1$).
$D = 1 \cdot \begin{vmatrix} 1 & -1 \\ b^2 & c^2+1 \end{vmatrix} - (-1) \cdot \begin{vmatrix} 0 & -1 \\ a^2 & c^2+1 \end{vmatrix} + 0 \cdot \begin{vmatrix} 0 & 1 \\ a^2 & b^2 \end{vmatrix}$
Evaluate the 2x2 determinants:
$\begin{vmatrix} 1 & -1 \\ b^2 & c^2+1 \end{vmatrix} = 1(c^2+1) - (-1)(b^2) = c^2+1+b^2$
$\begin{vmatrix} 0 & -1 \\ a^2 & c^2+1 \end{vmatrix} = 0(c^2+1) - (-1)(a^2) = 0+a^2 = a^2$
$\begin{vmatrix} 0 & 1 \\ a^2 & b^2 \end{vmatrix} = 0(b^2) - 1(a^2) = 0-a^2 = -a^2$ (Note: This term is multiplied by 0 in the expansion, so its value doesn't affect the final result).
Substitute these values back into the expansion:
$D = 1 \cdot (c^2+1+b^2) + 1 \cdot (a^2) + 0 \cdot (-a^2)$
$D = c^2+1+b^2 + a^2 + 0$
$D = 1 + a^2 + b^2 + c^2$
This is the required expression on the right-hand side.
Hence, Proved.
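The same identity can also be confirmed symbolically. The short sketch below uses SymPy (an assumption about available tooling, not part of the NCERT solution) to expand the determinant and simplify it to $1 + a^2 + b^2 + c^2$:

```python
import sympy as sp

a, b, c = sp.symbols('a b c')
D = sp.Matrix([[a**2 + 1, a*b,      a*c     ],
               [a*b,      b**2 + 1, b*c     ],
               [c*a,      c*b,      c**2 + 1]]).det()
print(sp.simplify(D))   # expected output: a**2 + b**2 + c**2 + 1
```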
Choose the correct answer in Exercises 15 and 16.
Question 15. Let A be a square matrix of order $3 \times 3$, then $|kA|$ is equal to
(A) $k|A|$
(B) $k^2|A|$
(C) $k^3|A|$
(D) $3k|A|$
Answer:
Solution:
Let $A$ be a square matrix of order $n \times n$. The property of determinants states that for any scalar $k$, $|kA| = k^n |A|$.
In this question, $A$ is a square matrix of order $3 \times 3$. So, $n = 3$.
Using the property, we have:
$|kA| = k^3 |A|$
Therefore, the correct answer is $k^3 |A|$.
Comparing with the given options:
(A) $k| A |$
(B) $k^2| A |$
(C) $k^3| A |$
(D) $3k | A |$
The correct option is (C).
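As an optional numerical illustration of the property $|kA| = k^n|A|$ (a sketch assuming NumPy, with an arbitrarily chosen matrix), the two sides can be compared directly for $n = 3$ and $k = 2$:

```python
import numpy as np

A = np.array([[1.0, 2.0, 3.0],
              [0.0, 4.0, 5.0],
              [1.0, 0.0, 6.0]])
k = 2.0

# For an n x n matrix, |kA| = k^n |A|; here n = 3, so both values are 176 (up to rounding).
print(np.linalg.det(k * A), k**3 * np.linalg.det(A))
```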
Question 16. Which of the following is correct
(A) Determinant is a square matrix.
(B) Determinant is a number associated to a matrix.
(C) Determinant is a number associated to a square matrix.
(D) None of these
Answer:
Solution:
Let's examine each option:
(A) Determinant is a square matrix.
This statement is incorrect. A determinant is a scalar value (a number), not a matrix.
(B) Determinant is a number associated to a matrix.
This statement is partially correct, but not precise. Determinants are only defined for square matrices.
(C) Determinant is a number associated to a square matrix.
This statement is correct. The determinant is a unique scalar value calculated from the elements of a square matrix.
(D) None of these
This statement is incorrect because option (C) is correct.
Therefore, the correct statement is that a determinant is a number associated with a square matrix.
The correct option is (C) Determinant is a number associated to a square matrix.
Example 17 & 18 (Before Exercise 4.3)
Example 17: Find the area of the triangle whose vertices are (3, 8), (– 4, 2) and (5, 1).
Answer:
Given:
The vertices of the triangle are $(x_1, y_1) = (3, 8)$, $(x_2, y_2) = (-4, 2)$, and $(x_3, y_3) = (5, 1)$.
To Find:
Area of the triangle.
Solution:
The area of a triangle with vertices $(x_1, y_1)$, $(x_2, y_2)$, and $(x_3, y_3)$ is given by the formula:
$\text{Area} = \frac{1}{2} \left| \begin{vmatrix} x_1 & y_1 & 1 \\ x_2 & y_2 & 1 \\ x_3 & y_3 & 1 \end{vmatrix} \right|$
Substitute the coordinates of the given vertices into the determinant:
$\text{Area} = \frac{1}{2} \left| \begin{vmatrix} 3 & 8 & 1 \\ -4 & 2 & 1 \\ 5 & 1 & 1 \end{vmatrix} \right|$
Evaluate the determinant by expanding along the first row ($R_1$):
Determinant value $= 3 \begin{vmatrix} 2 & 1 \\ 1 & 1 \end{vmatrix} - 8 \begin{vmatrix} -4 & 1 \\ 5 & 1 \end{vmatrix} + 1 \begin{vmatrix} -4 & 2 \\ 5 & 1 \end{vmatrix}$
$= 3((2)(1) - (1)(1)) - 8((-4)(1) - (1)(5)) + 1((-4)(1) - (2)(5))$
$= 3(2 - 1) - 8(-4 - 5) + 1(-4 - 10)$
$= 3(1) - 8(-9) + 1(-14)$
$= 3 + 72 - 14$
$= 75 - 14$
$= 61$
Now, calculate the area using the formula:
$\text{Area} = \frac{1}{2} |61|$
$\text{Area} = \frac{61}{2}$
Since the area must be positive, the area of the triangle is $\frac{61}{2}$ square units.
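The determinant formula for the area translates directly into code. The helper below is an illustrative sketch (assuming NumPy; the function name `triangle_area` is our own) and reproduces the value $\frac{61}{2}$ for these vertices:

```python
import numpy as np

def triangle_area(p1, p2, p3):
    """Area = (1/2) |det([[x1, y1, 1], [x2, y2, 1], [x3, y3, 1]])|."""
    M = np.array([[p1[0], p1[1], 1.0],
                  [p2[0], p2[1], 1.0],
                  [p3[0], p3[1], 1.0]])
    return abs(np.linalg.det(M)) / 2.0

print(triangle_area((3, 8), (-4, 2), (5, 1)))   # 30.5, i.e. 61/2
```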
Example 18: Find the equation of the line joining A(1, 3) and B (0, 0) using determinants and find k if D(k, 0) is a point such that area of triangle ABD is 3 sq units.
Answer:
Given:
Points A(1, 3) and B(0, 0).
Point D(k, 0).
Area of triangle ABD = 3 sq units.
To Find:
The equation of the line joining A and B using determinants.
The value(s) of k such that the area of triangle ABD is 3 sq units.
Solution:
Part 1: Equation of the line joining A(1, 3) and B(0, 0)
Let P(x, y) be any point on the line joining points A(1, 3) and B(0, 0).
For the three points A, B, and P to be collinear, the area of the triangle formed by these points must be zero.
The area of a triangle with vertices $(x_1, y_1)$, $(x_2, y_2)$, and $(x_3, y_3)$ is $\frac{1}{2} \left| \begin{vmatrix} x_1 & y_1 & 1 \\ x_2 & y_2 & 1 \\ x_3 & y_3 & 1 \end{vmatrix} \right|$.
Here, $(x_1, y_1) = (1, 3)$, $(x_2, y_2) = (0, 0)$, and $(x_3, y_3) = (x, y)$.
Setting the area to zero for collinear points:
$\frac{1}{2} \left| \begin{vmatrix} 1 & 3 & 1 \\ 0 & 0 & 1 \\ x & y & 1 \end{vmatrix} \right| = 0$
This implies the determinant value must be zero:
$\begin{vmatrix} 1 & 3 & 1 \\ 0 & 0 & 1 \\ x & y & 1 \end{vmatrix} = 0$
Expand the determinant along the second row ($R_2$), as it contains two zeros; the cofactor signs for this row are $-, +, -$:
$-0 \cdot \begin{vmatrix} 3 & 1 \\ y & 1 \end{vmatrix} + 0 \cdot \begin{vmatrix} 1 & 1 \\ x & 1 \end{vmatrix} - 1 \cdot \begin{vmatrix} 1 & 3 \\ x & y \end{vmatrix} = 0$
$0 + 0 - (1 \cdot y - 3 \cdot x) = 0$
$3x - y = 0$
or
$y = 3x$
Thus, the equation of the line joining A(1, 3) and B(0, 0) is $y = 3x$ or $3x - y = 0$.
Part 2: Find k if Area of triangle ABD = 3 sq units
The vertices of triangle ABD are A(1, 3), B(0, 0), and D(k, 0).
The area of triangle ABD is given as 3 sq units.
Using the determinant formula for the area:
$\text{Area of } \triangle ABD = \frac{1}{2} \left| \begin{vmatrix} x_A & y_A & 1 \\ x_B & y_B & 1 \\ x_D & y_D & 1 \end{vmatrix} \right|$
Substitute the coordinates of A(1, 3), B(0, 0), and D(k, 0):
$3 = \frac{1}{2} \left| \begin{vmatrix} 1 & 3 & 1 \\ 0 & 0 & 1 \\ k & 0 & 1 \end{vmatrix} \right|$
Multiply by 2 on both sides:
$6 = \left| \begin{vmatrix} 1 & 3 & 1 \\ 0 & 0 & 1 \\ k & 0 & 1 \end{vmatrix} \right|$
Evaluate the determinant. Expanding along the second row ($R_2$) is convenient:
Determinant value $= -0 \cdot \begin{vmatrix} 3 & 1 \\ 0 & 1 \end{vmatrix} + 0 \cdot \begin{vmatrix} 1 & 1 \\ k & 1 \end{vmatrix} - 1 \cdot \begin{vmatrix} 1 & 3 \\ k & 0 \end{vmatrix}$
$= 0 + 0 - (1 \cdot 0 - 3 \cdot k)$
$= -(-3k)$
$= 3k$
So, the equation becomes:
$6 = |3k|$
The absolute value means we have two possibilities:
Case 1: $3k = 6$
$k = \frac{6}{3}$
$k = 2$
Case 2: $3k = -6$
$k = \frac{-6}{3}$
$k = -2$
Thus, the possible values of k are 2 and -2.
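Reusing the same determinant-based area formula, both admissible values of $k$ can be confirmed numerically. The sketch below (assuming NumPy; `triangle_area` is the same illustrative helper as in the previous sketch) checks that $k = 2$ and $k = -2$ each give an area of 3 for triangle ABD:

```python
import numpy as np

def triangle_area(p1, p2, p3):
    # Same determinant-based formula as before.
    M = np.array([[p1[0], p1[1], 1.0],
                  [p2[0], p2[1], 1.0],
                  [p3[0], p3[1], 1.0]])
    return abs(np.linalg.det(M)) / 2.0

A, B = (1, 3), (0, 0)
for k in (2, -2):
    print(k, triangle_area(A, B, (k, 0)))   # both print an area of 3.0
```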
Exercise 4.3
Question 1. Find area of the triangle with vertices at the point given in each of the following :
(i) (1, 0), (6, 0), (4, 3)
(ii) (2, 7), (1, 1), (10, 8)
(iii) (–2, –3), (3, 2), (–1, –8)
Answer:
The area of a triangle with vertices $(x_1, y_1)$, $(x_2, y_2)$, and $(x_3, y_3)$ is given by the formula:
$\text{Area} = \frac{1}{2} \left| \begin{vmatrix} x_1 & y_1 & 1 \\ x_2 & y_2 & 1 \\ x_3 & y_3 & 1 \end{vmatrix} \right|$
(i) Vertices: (1, 0), (6, 0), (4, 3)
Let $(x_1, y_1) = (1, 0)$, $(x_2, y_2) = (6, 0)$, and $(x_3, y_3) = (4, 3)$.
Area $= \frac{1}{2} \left| \begin{vmatrix} 1 & 0 & 1 \\ 6 & 0 & 1 \\ 4 & 3 & 1 \end{vmatrix} \right|$
Expand the determinant along the second column ($C_2$); only $a_{32} = 3$ is non-zero, and its cofactor carries the sign $(-1)^{3+2} = -1$:
Determinant value $= -3 \cdot \begin{vmatrix} 1 & 1 \\ 6 & 1 \end{vmatrix}$
$= -3 (1 \cdot 1 - 1 \cdot 6)$
$= -3 (1 - 6)$
$= -3 (-5)$
$= 15$
Area $= \frac{1}{2} |15|$
Area $= \frac{1}{2} \cdot 15$
Area $= \frac{15}{2}$ square units.
(ii) Vertices: (2, 7), (1, 1), (10, 8)
Let $(x_1, y_1) = (2, 7)$, $(x_2, y_2) = (1, 1)$, and $(x_3, y_3) = (10, 8)$.
Area $= \frac{1}{2} \left| \begin{vmatrix} 2 & 7 & 1 \\ 1 & 1 & 1 \\ 10 & 8 & 1 \end{vmatrix} \right|$
Expand the determinant along the first row ($R_1$):
Determinant value $= 2 \begin{vmatrix} 1 & 1 \\ 8 & 1 \end{vmatrix} - 7 \begin{vmatrix} 1 & 1 \\ 10 & 1 \end{vmatrix} + 1 \begin{vmatrix} 1 & 1 \\ 10 & 8 \end{vmatrix}$
$= 2 (1 \cdot 1 - 1 \cdot 8) - 7 (1 \cdot 1 - 1 \cdot 10) + 1 (1 \cdot 8 - 1 \cdot 10)$
$= 2 (1 - 8) - 7 (1 - 10) + 1 (8 - 10)$
$= 2 (-7) - 7 (-9) + 1 (-2)$
$= -14 + 63 - 2$
$= 49 - 2$
$= 47$
Area $= \frac{1}{2} |47|$
Area $= \frac{1}{2} \cdot 47$
Area $= \frac{47}{2}$ square units.
(iii) Vertices: (–2, –3), (3, 2), (–1, –8)
Let $(x_1, y_1) = (-2, -3)$, $(x_2, y_2) = (3, 2)$, and $(x_3, y_3) = (-1, -8)$.
Area $= \frac{1}{2} \left| \begin{vmatrix} -2 & -3 & 1 \\ 3 & 2 & 1 \\ -1 & -8 & 1 \end{vmatrix} \right|$
Expand the determinant along the first row ($R_1$):
Determinant value $= -2 \begin{vmatrix} 2 & 1 \\ -8 & 1 \end{vmatrix} - (-3) \begin{vmatrix} 3 & 1 \\ -1 & 1 \end{vmatrix} + 1 \begin{vmatrix} 3 & 2 \\ -1 & -8 \end{vmatrix}$
$= -2 (2 \cdot 1 - 1 \cdot (-8)) + 3 (3 \cdot 1 - 1 \cdot (-1)) + 1 (3 \cdot (-8) - 2 \cdot (-1))$
$= -2 (2 + 8) + 3 (3 + 1) + 1 (-24 + 2)$
$= -2 (10) + 3 (4) + 1 (-22)$
$= -20 + 12 - 22$
$= -8 - 22$
$= -30$
Area $= \frac{1}{2} |-30|$
Area $= \frac{1}{2} \cdot 30$
Area $= 15$ square units.
Question 2. Show that points
A (a, b + c), B (b, c + a), C (c, a + b) are collinear.
Answer:
Given:
The points are A (a, b + c), B (b, c + a), and C (c, a + b).
To Prove:
The points A, B, and C are collinear.
Solution:
Points A, B, and C are collinear if and only if the area of the triangle formed by these points is zero.
The area of a triangle with vertices $(x_1, y_1)$, $(x_2, y_2)$, and $(x_3, y_3)$ is given by the formula:
$\text{Area} = \frac{1}{2} \left| \begin{vmatrix} x_1 & y_1 & 1 \\ x_2 & y_2 & 1 \\ x_3 & y_3 & 1 \end{vmatrix} \right|$
Here, $(x_1, y_1) = (a, b+c)$, $(x_2, y_2) = (b, c+a)$, and $(x_3, y_3) = (c, a+b)$.
The area of the triangle formed by points A, B, and C is:
Area $= \frac{1}{2} \left| \begin{vmatrix} a & b+c & 1 \\ b & c+a & 1 \\ c & a+b & 1 \end{vmatrix} \right|$
Let's evaluate the determinant:
$D = \begin{vmatrix} a & b+c & 1 \\ b & c+a & 1 \\ c & a+b & 1 \end{vmatrix}$
Apply the column operation $C_1 \to C_1 + C_2$.
$D = \begin{vmatrix} a + (b+c) & b+c & 1 \\ b + (c+a) & c+a & 1 \\ c + (a+b) & a+b & 1 \end{vmatrix}$
$D = \begin{vmatrix} a+b+c & b+c & 1 \\ a+b+c & c+a & 1 \\ a+b+c & a+b & 1 \end{vmatrix}$
Take the common factor $(a+b+c)$ from the first column ($C_1$).
$D = (a+b+c) \begin{vmatrix} 1 & b+c & 1 \\ 1 & c+a & 1 \\ 1 & a+b & 1 \end{vmatrix}$
In the resulting determinant, the first column ($C_1$) and the third column ($C_3$) are identical.
$\begin{vmatrix} 1 & b+c & 1 \\ 1 & c+a & 1 \\ 1 & a+b & 1 \end{vmatrix}$
When two columns (or rows) of a determinant are identical, the value of the determinant is zero.
So, $D = (a+b+c) \cdot 0$
$D = 0$
Now, calculate the area of the triangle:
Area $= \frac{1}{2} |D|$
Area $= \frac{1}{2} |0|$
Area $= 0$
Since the area of the triangle formed by points A, B, and C is 0, the points A, B, and C are collinear.
Hence, Proved.
Question 3. Find values of k if area of triangle is 4 sq. units and vertices are
(i) (k, 0), (4, 0), (0, 2)
(ii) (–2, 0), (0, 4), (0, k)
Answer:
The area of a triangle with vertices $(x_1, y_1)$, $(x_2, y_2)$, and $(x_3, y_3)$ is given by the formula:
$\text{Area} = \frac{1}{2} \left| \begin{vmatrix} x_1 & y_1 & 1 \\ x_2 & y_2 & 1 \\ x_3 & y_3 & 1 \end{vmatrix} \right|$
Given that the area of the triangle is 4 sq. units, the determinant value must satisfy $\frac{1}{2} |\text{Determinant}| = 4$, which means $|\text{Determinant}| = 8$. Therefore, the determinant value can be either 8 or -8.
(i) Vertices: (k, 0), (4, 0), (0, 2)
Let $(x_1, y_1) = (k, 0)$, $(x_2, y_2) = (4, 0)$, and $(x_3, y_3) = (0, 2)$.
The determinant is:
$D = \begin{vmatrix} k & 0 & 1 \\ 4 & 0 & 1 \\ 0 & 2 & 1 \end{vmatrix}$
Expand the determinant along the second column ($C_2$); only $a_{32} = 2$ is non-zero, and its cofactor carries the sign $(-1)^{3+2} = -1$:
$D = -2 \cdot \begin{vmatrix} k & 1 \\ 4 & 1 \end{vmatrix}$
$D = -2 (k \cdot 1 - 1 \cdot 4)$
$D = -2 (k - 4)$
Given Area = 4, so $\frac{1}{2} |D| = 4$, which means $|D| = 8$.
$|-2(k-4)| = 8$
$|-2| \, |k-4| = 8$
$2 |k-4| = 8$
$|k-4| = 4$
This gives two possible cases:
Case 1: $k-4 = 4$
$k = 4 + 4 = 8$
Case 2: $k-4 = -4$
$k = 4 - 4 = 0$
The values of k are 0 and 8.
(ii) Vertices: (–2, 0), (0, 4), (0, k)
Let $(x_1, y_1) = (-2, 0)$, $(x_2, y_2) = (0, 4)$, and $(x_3, y_3) = (0, k)$.
The determinant is:
$D = \begin{vmatrix} -2 & 0 & 1 \\ 0 & 4 & 1 \\ 0 & k & 1 \end{vmatrix}$
Expand the determinant along the first column ($C_1$):
$D = -2 \cdot \begin{vmatrix} 4 & 1 \\ k & 1 \end{vmatrix} - 0 \cdot (\text{cofactor}) + 0 \cdot (\text{cofactor})$
$D = -2 (4 \cdot 1 - 1 \cdot k)$
$D = -2 (4 - k)$
$D = -8 + 2k$
Given Area = 4, so $\frac{1}{2} |D| = 4$, which means $|D| = 8$.
$|-8 + 2k| = 8$
$|2k - 8| = 8$
$|2(k - 4)| = 8$
$|2| |k - 4| = 8$
$2 |k - 4| = 8$
$|k - 4| = 4$
This gives two possible cases:
Case 1: $k-4 = 4$
$k = 4 + 4 = 8$
Case 2: $k-4 = -4$
$k = 4 - 4 = 0$
The values of k are 0 and 8.
Question 4.
(i) Find equation of line joining (1, 2) and (3, 6) using determinants.
(ii) Find equation of line joining (3, 1) and (9, 3) using determinants.
Answer:
To find the equation of the line joining two points $(x_1, y_1)$ and $(x_2, y_2)$ using determinants, we consider a general point $(x, y)$ on the line. For the three points $(x_1, y_1)$, $(x_2, y_2)$, and $(x, y)$ to be collinear, the area of the triangle formed by them must be zero.
The area of the triangle is given by:
$\text{Area} = \frac{1}{2} \left| \begin{vmatrix} x_1 & y_1 & 1 \\ x_2 & y_2 & 1 \\ x & y & 1 \end{vmatrix} \right|$
For collinear points, Area = 0, so the determinant must be zero:
$\begin{vmatrix} x_1 & y_1 & 1 \\ x_2 & y_2 & 1 \\ x & y & 1 \end{vmatrix} = 0$
(i) Points: (1, 2) and (3, 6)
Let $(x_1, y_1) = (1, 2)$ and $(x_2, y_2) = (3, 6)$. The general point is $(x, y)$.
The determinant for collinearity is:
$\begin{vmatrix} 1 & 2 & 1 \\ 3 & 6 & 1 \\ x & y & 1 \end{vmatrix} = 0$
Expand the determinant (e.g., along the first row):
$1 \cdot \begin{vmatrix} 6 & 1 \\ y & 1 \end{vmatrix} - 2 \cdot \begin{vmatrix} 3 & 1 \\ x & 1 \end{vmatrix} + 1 \cdot \begin{vmatrix} 3 & 6 \\ x & y \end{vmatrix} = 0$
$1(6 \cdot 1 - 1 \cdot y) - 2(3 \cdot 1 - 1 \cdot x) + 1(3 \cdot y - 6 \cdot x) = 0$
$(6 - y) - 2(3 - x) + (3y - 6x) = 0$
$6 - y - 6 + 2x + 3y - 6x = 0$
Combine like terms:
$(2x - 6x) + (-y + 3y) + (6 - 6) = 0$
$-4x + 2y = 0$
Dividing the entire equation by -2:
$2x - y = 0$
Thus, the equation of the line joining (1, 2) and (3, 6) is $2x - y = 0$ or $y = 2x$.
(ii) Points: (3, 1) and (9, 3)
Let $(x_1, y_1) = (3, 1)$ and $(x_2, y_2) = (9, 3)$. The general point is $(x, y)$.
The determinant for collinearity is:
$\begin{vmatrix} 3 & 1 & 1 \\ 9 & 3 & 1 \\ x & y & 1 \end{vmatrix} = 0$
Expand the determinant (e.g., along the first row):
$3 \cdot \begin{vmatrix} 3 & 1 \\ y & 1 \end{vmatrix} - 1 \cdot \begin{vmatrix} 9 & 1 \\ x & 1 \end{vmatrix} + 1 \cdot \begin{vmatrix} 9 & 3 \\ x & y \end{vmatrix} = 0$
$3(3 \cdot 1 - 1 \cdot y) - 1(9 \cdot 1 - 1 \cdot x) + 1(9 \cdot y - 3 \cdot x) = 0$
$3(3 - y) - (9 - x) + (9y - 3x) = 0$
$9 - 3y - 9 + x + 9y - 3x = 0$
Combine like terms:
$(x - 3x) + (-3y + 9y) + (9 - 9) = 0$
$-2x + 6y = 0$
Dividing the entire equation by -2:
$x - 3y = 0$
Thus, the equation of the line joining (3, 1) and (9, 3) is $x - 3y = 0$ or $y = \frac{1}{3}x$.
Question 5. If the area of a triangle with vertices (2, –6), (5, 4) and (k, 4) is 35 sq units, then k is
(A) 12
(B) –2
(C) –12, –2
(D) 12, –2
Answer:
Solution:
The vertices of the triangle are given as $(x_1, y_1) = (2, -6)$, $(x_2, y_2) = (5, 4)$, and $(x_3, y_3) = (k, 4)$.
The area of the triangle is given as 35 sq units.
The area of a triangle with vertices $(x_1, y_1)$, $(x_2, y_2)$, and $(x_3, y_3)$ is given by the formula:
$\text{Area} = \frac{1}{2} \left| \begin{vmatrix} x_1 & y_1 & 1 \\ x_2 & y_2 & 1 \\ x_3 & y_3 & 1 \end{vmatrix} \right|$
Substitute the given vertices and area into the formula:
$35 = \frac{1}{2} \left| \begin{vmatrix} 2 & -6 & 1 \\ 5 & 4 & 1 \\ k & 4 & 1 \end{vmatrix} \right|$
Multiply both sides by 2:
$70 = \left| \begin{vmatrix} 2 & -6 & 1 \\ 5 & 4 & 1 \\ k & 4 & 1 \end{vmatrix} \right|$
Evaluate the determinant. Expand along the first row ($R_1$):
Determinant value $= 2 \begin{vmatrix} 4 & 1 \\ 4 & 1 \end{vmatrix} - (-6) \begin{vmatrix} 5 & 1 \\ k & 1 \end{vmatrix} + 1 \begin{vmatrix} 5 & 4 \\ k & 4 \end{vmatrix}$
$= 2 ((4)(1) - (1)(4)) + 6 ((5)(1) - (1)(k)) + 1 ((5)(4) - (4)(k))$
$= 2 (4 - 4) + 6 (5 - k) + (20 - 4k)$
$= 2 (0) + 30 - 6k + 20 - 4k$
$= 0 + 50 - 10k$
$= 50 - 10k$
Now, we have the equation involving the absolute value of the determinant:
$|50 - 10k| = 70$
This equation leads to two possibilities:
Case 1: $50 - 10k = 70$
$-10k = 70 - 50$
$-10k = 20$
$k = \frac{20}{-10}$
$k = -2$
Case 2: $50 - 10k = -70$
$-10k = -70 - 50$
$-10k = -120$
$k = \frac{-120}{-10}$
$k = 12$
The possible values of k are 12 and -2.
Comparing with the given options:
(A) 12
(B) –2
(C) –12, –2
(D) 12, –2
The correct option is (D).
Example 19 to 22 (Before Exercise 4.4)
Example 19: Find the minor of element 6 in the determinant ∆ = $\begin{vmatrix} 1&2&3\\4&5&6\\7&8&9 \end{vmatrix}$
Answer:
Given:
The determinant $\Delta = \begin{vmatrix} 1&2&3\\4&5&6\\7&8&9 \end{vmatrix}$.
The element is 6.
To Find:
The minor of the element 6.
Solution:
The element 6 is in the second row and the third column of the determinant.
Let $a_{ij}$ represent the element in the $i$-th row and $j$-th column. The element 6 is $a_{23}$.
The minor $M_{ij}$ of an element $a_{ij}$ is the determinant of the submatrix obtained by deleting the $i$-th row and $j$-th column.
For the element $a_{23}=6$, the minor is $M_{23}$.
Delete the 2nd row ($R_2$) and the 3rd column ($C_3$) from the determinant $\Delta$:
$\begin{vmatrix} 1&2&\cancel{3}\\ \cancel{4}&\cancel{5}&\cancel{6}\\ 7&8&\cancel{9} \end{vmatrix}$
The remaining submatrix is $\begin{vmatrix} 1&2\\7&8 \end{vmatrix}$.
Calculate the determinant of this submatrix to find the minor $M_{23}$:
$M_{23} = \begin{vmatrix} 1&2\\7&8 \end{vmatrix}$
$M_{23} = (1 \times 8) - (2 \times 7)$
$M_{23} = 8 - 14$
$M_{23} = -6$
The minor of the element 6 is -6.
Example 20: Find minors and cofactors of all the elements of the determinant $\begin{vmatrix} 1&−2\\4&3 \end{vmatrix}$
Answer:
Given:
The determinant $\begin{vmatrix} 1&−2\\4&3 \end{vmatrix}$.
To Find:
Minors and cofactors of all elements.
Solution:
Let the determinant be denoted by $\Delta$. The elements are $a_{11}=1$, $a_{12}=-2$, $a_{21}=4$, and $a_{22}=3$.
The Minor $M_{ij}$ of an element $a_{ij}$ is the determinant of the submatrix obtained by deleting the $i$-th row and $j$-th column.
The Cofactor $A_{ij}$ of an element $a_{ij}$ is given by $A_{ij} = (-1)^{i+j} M_{ij}$.
Calculating Minors:
Minor of $a_{11}=1$: $M_{11}$ is the determinant of the submatrix after deleting row 1 and column 1.
$M_{11} = \begin{vmatrix} 3 \end{vmatrix} = 3$
Minor of $a_{12}=-2$: $M_{12}$ is the determinant of the submatrix after deleting row 1 and column 2.
$M_{12} = \begin{vmatrix} 4 \end{vmatrix} = 4$
Minor of $a_{21}=4$: $M_{21}$ is the determinant of the submatrix after deleting row 2 and column 1.
$M_{21} = \begin{vmatrix} -2 \end{vmatrix} = -2$
Minor of $a_{22}=3$: $M_{22}$ is the determinant of the submatrix after deleting row 2 and column 2.
$M_{22} = \begin{vmatrix} 1 \end{vmatrix} = 1$
Calculating Cofactors:
Cofactor of $a_{11}=1$: $A_{11} = (-1)^{1+1} M_{11} = (-1)^2 \cdot 3 = 1 \cdot 3 = 3$
Cofactor of $a_{12}=-2$: $A_{12} = (-1)^{1+2} M_{12} = (-1)^3 \cdot 4 = -1 \cdot 4 = -4$
Cofactor of $a_{21}=4$: $A_{21} = (-1)^{2+1} M_{21} = (-1)^3 \cdot (-2) = -1 \cdot (-2) = 2$
Cofactor of $a_{22}=3$: $A_{22} = (-1)^{2+2} M_{22} = (-1)^4 \cdot 1 = 1 \cdot 1 = 1$
Summary of Minors and Cofactors:
Minor of element 1 ($a_{11}$): $M_{11} = 3$
Cofactor of element 1 ($a_{11}$): $A_{11} = 3$
Minor of element -2 ($a_{12}$): $M_{12} = 4$
Cofactor of element -2 ($a_{12}$): $A_{12} = -4$
Minor of element 4 ($a_{21}$): $M_{21} = -2$
Cofactor of element 4 ($a_{21}$): $A_{21} = 2$
Minor of element 3 ($a_{22}$): $M_{22} = 1$
Cofactor of element 3 ($a_{22}$): $A_{22} = 1$
Example 21: Find minors and cofactors of the elements $a_{11}$, $a_{21}$ in the determinant
∆ = $\begin{vmatrix} a_{11}&a_{12}&a_{13}\\a_{21}&a_{22}&a_{23}\\a_{31}&a_{32}&a_{33} \end{vmatrix}$
Answer:
Given:
The determinant $\Delta = \begin{vmatrix} a_{11}&a_{12}&a_{13}\\a_{21}&a_{22}&a_{23}\\a_{31}&a_{32}&a_{33} \end{vmatrix}$.
The elements are $a_{11}$ and $a_{21}$.
To Find:
Minors and cofactors of the elements $a_{11}$ and $a_{21}$.
Solution:
The Minor $M_{ij}$ of an element $a_{ij}$ is the determinant of the submatrix obtained by deleting the $i$-th row and $j$-th column.
The Cofactor $A_{ij}$ of an element $a_{ij}$ is given by $A_{ij} = (-1)^{i+j} M_{ij}$.
For element $a_{11}$:
$a_{11}$ is in the 1st row ($i=1$) and 1st column ($j=1$).
To find the Minor $M_{11}$, delete the 1st row and 1st column:
$\begin{vmatrix} \cancel{a_{11}}&\cancel{a_{12}}&\cancel{a_{13}}\\ \cancel{a_{21}}&a_{22}&a_{23}\\ \cancel{a_{31}}&a_{32}&a_{33} \end{vmatrix}$
The remaining submatrix is $\begin{vmatrix} a_{22}&a_{23}\\a_{32}&a_{33} \end{vmatrix}$.
The Minor $M_{11}$ is the determinant of this submatrix:
$M_{11} = \begin{vmatrix} a_{22}&a_{23}\\a_{32}&a_{33} \end{vmatrix} = a_{22} a_{33} - a_{23} a_{32}$
The Cofactor $A_{11}$ is given by $A_{11} = (-1)^{1+1} M_{11}$:
$A_{11} = (-1)^2 (a_{22} a_{33} - a_{23} a_{32})$
$A_{11} = a_{22} a_{33} - a_{23} a_{32}$
For element $a_{21}$:
$a_{21}$ is in the 2nd row ($i=2$) and 1st column ($j=1$).
To find the Minor $M_{21}$, delete the 2nd row and 1st column:
$\begin{vmatrix} \cancel{a_{11}}&a_{12}&a_{13}\\ \cancel{a_{21}}&\cancel{a_{22}}&\cancel{a_{23}}\\ \cancel{a_{31}}&a_{32}&a_{33} \end{vmatrix}$
The remaining submatrix is $\begin{vmatrix} a_{12}&a_{13}\\a_{32}&a_{33} \end{vmatrix}$.
The Minor $M_{21}$ is the determinant of this submatrix:
$M_{21} = \begin{vmatrix} a_{12}&a_{13}\\a_{32}&a_{33} \end{vmatrix} = a_{12} a_{33} - a_{13} a_{32}$
The Cofactor $A_{21}$ is given by $A_{21} = (-1)^{2+1} M_{21}$:
$A_{21} = (-1)^3 (a_{12} a_{33} - a_{13} a_{32})$
$A_{21} = - (a_{12} a_{33} - a_{13} a_{32})$
$A_{21} = a_{13} a_{32} - a_{12} a_{33}$
Summary:
Minor of $a_{11}$: $M_{11} = a_{22} a_{33} - a_{23} a_{32}$
Cofactor of $a_{11}$: $A_{11} = a_{22} a_{33} - a_{23} a_{32}$
Minor of $a_{21}$: $M_{21} = a_{12} a_{33} - a_{13} a_{32}$
Cofactor of $a_{21}$: $A_{21} = a_{13} a_{32} - a_{12} a_{33}$
Example 22: Find minors and cofactors of the elements of the determinant $\begin{vmatrix} 2&−3&5\\6&0&4\\1&5&−7 \end{vmatrix}$ and verify that $a_{11}A_{31} + a_{12}A_{32} + a_{13}A_{33} = 0$
Answer:
Given:
The determinant $\Delta = \begin{vmatrix} 2&−3&5\\6&0&4\\1&5&−7 \end{vmatrix}$.
To Find:
Minors and cofactors of all elements and verify $a_{11} A_{31} + a_{12} A_{32} + a_{13} A_{33}= 0$.
Solution:
The elements of the determinant are:
$a_{11}=2$, $a_{12}=-3$, $a_{13}=5$
$a_{21}=6$, $a_{22}=0$, $a_{23}=4$
$a_{31}=1$, $a_{32}=5$, $a_{33}=-7$
The Minor $M_{ij}$ of an element $a_{ij}$ is the determinant of the submatrix obtained by deleting the $i$-th row and $j$-th column.
The Cofactor $A_{ij}$ of an element $a_{ij}$ is given by $A_{ij} = (-1)^{i+j} M_{ij}$.
Calculating Minors:
$M_{11} = \begin{vmatrix} 0&4\\5&−7 \end{vmatrix} = (0)(-7) - (4)(5) = 0 - 20 = -20$
$M_{12} = \begin{vmatrix} 6&4\\1&−7 \end{vmatrix} = (6)(-7) - (4)(1) = -42 - 4 = -46$
$M_{13} = \begin{vmatrix} 6&0\\1&5 \end{vmatrix} = (6)(5) - (0)(1) = 30 - 0 = 30$
$M_{21} = \begin{vmatrix} −3&5\\5&−7 \end{vmatrix} = (-3)(-7) - (5)(5) = 21 - 25 = -4$
$M_{22} = \begin{vmatrix} 2&5\\1&−7 \end{vmatrix} = (2)(-7) - (5)(1) = -14 - 5 = -19$
$M_{23} = \begin{vmatrix} 2&−3\\1&5 \end{vmatrix} = (2)(5) - (-3)(1) = 10 - (-3) = 13$
$M_{31} = \begin{vmatrix} −3&5\\0&4 \end{vmatrix} = (-3)(4) - (5)(0) = -12 - 0 = -12$
$M_{32} = \begin{vmatrix} 2&5\\6&4 \end{vmatrix} = (2)(4) - (5)(6) = 8 - 30 = -22$
$M_{33} = \begin{vmatrix} 2&−3\\6&0 \end{vmatrix} = (2)(0) - (-3)(6) = 0 - (-18) = 18$
Calculating Cofactors:
$A_{11} = (-1)^{1+1} M_{11} = (1)(-20) = -20$
$A_{12} = (-1)^{1+2} M_{12} = (-1)(-46) = 46$
$A_{13} = (-1)^{1+3} M_{13} = (1)(30) = 30$
$A_{21} = (-1)^{2+1} M_{21} = (-1)(-4) = 4$
$A_{22} = (-1)^{2+2} M_{22} = (1)(-19) = -19$
$A_{23} = (-1)^{2+3} M_{23} = (-1)(13) = -13$
$A_{31} = (-1)^{3+1} M_{31} = (1)(-12) = -12$
$A_{32} = (-1)^{3+2} M_{32} = (-1)(-22) = 22$
$A_{33} = (-1)^{3+3} M_{33} = (1)(18) = 18$
Verification: $a_{11} A_{31} + a_{12} A_{32} + a_{13} A_{33}= 0$
We need to calculate the sum of the products of the elements of the first row ($a_{11}, a_{12}, a_{13}$) with the corresponding cofactors of the third row ($A_{31}, A_{32}, A_{33}$).
Left Hand Side (LHS) $= a_{11} A_{31} + a_{12} A_{32} + a_{13} A_{33}$
Substitute the values:
LHS $= (2)(-12) + (-3)(22) + (5)(18)$
LHS $= -24 - 66 + 90$
LHS $= -90 + 90$
LHS $= 0$
Right Hand Side (RHS) $= 0$
Since LHS = RHS, the identity is verified.
This confirms the property that the sum of the product of elements of a row (or a column) with the cofactors of corresponding elements of another row (or column) is zero.
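This row-times-alien-cofactors property is also easy to check programmatically. The sketch below is an illustration (assuming NumPy; the helper name `cofactor_matrix` is our own): it builds the full cofactor matrix of the given determinant and verifies both $a_{11}A_{31}+a_{12}A_{32}+a_{13}A_{33}=0$ and the ordinary expansion $a_{11}A_{11}+a_{12}A_{12}+a_{13}A_{13}=|A|$:

```python
import numpy as np

A = np.array([[2.0, -3.0,  5.0],
              [6.0,  0.0,  4.0],
              [1.0,  5.0, -7.0]])

def cofactor_matrix(M):
    n = M.shape[0]
    C = np.zeros_like(M)
    for i in range(n):
        for j in range(n):
            # Minor M_ij: delete row i and column j, then take the determinant.
            minor = np.delete(np.delete(M, i, axis=0), j, axis=1)
            C[i, j] = (-1) ** (i + j) * np.linalg.det(minor)
    return C

C = cofactor_matrix(A)
print(A[0] @ C[2])                        # row 1 with cofactors of row 3 -> approx 0
print(A[0] @ C[0], np.linalg.det(A))      # row 1 with its own cofactors -> both approx -28
```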
Exercise 4.4
Write Minors and Cofactors of the elements of following determinants:
Question 1.
(i) $\begin{vmatrix} 2&−4\\0&3 \end{vmatrix}$
(ii) $\begin{vmatrix} a&c\\b&d \end{vmatrix}$
Answer:
The Minor $M_{ij}$ of an element $a_{ij}$ is the determinant of the submatrix obtained by deleting the $i$-th row and $j$-th column.
The Cofactor $A_{ij}$ of an element $a_{ij}$ is given by $A_{ij} = (-1)^{i+j} M_{ij}$.
(i) Determinant: $\begin{vmatrix} 2&−4\\0&3 \end{vmatrix}$
The elements are $a_{11}=2$, $a_{12}=-4$, $a_{21}=0$, and $a_{22}=3$.
Minors:
$M_{11} = \begin{vmatrix} 3 \end{vmatrix} = 3$
$M_{12} = \begin{vmatrix} 0 \end{vmatrix} = 0$
$M_{21} = \begin{vmatrix} -4 \end{vmatrix} = -4$
$M_{22} = \begin{vmatrix} 2 \end{vmatrix} = 2$
Cofactors:
$A_{11} = (-1)^{1+1} M_{11} = (-1)^2 \cdot 3 = 1 \cdot 3 = 3$
$A_{12} = (-1)^{1+2} M_{12} = (-1)^3 \cdot 0 = -1 \cdot 0 = 0$
$A_{21} = (-1)^{2+1} M_{21} = (-1)^3 \cdot (-4) = -1 \cdot (-4) = 4$
$A_{22} = (-1)^{2+2} M_{22} = (-1)^4 \cdot 2 = 1 \cdot 2 = 2$
(ii) Determinant: $\begin{vmatrix} a&c\\b&d \end{vmatrix}$
The elements are $a_{11}=a$, $a_{12}=c$, $a_{21}=b$, and $a_{22}=d$.
Minors:
$M_{11} = \begin{vmatrix} d \end{vmatrix} = d$
$M_{12} = \begin{vmatrix} b \end{vmatrix} = b$
$M_{21} = \begin{vmatrix} c \end{vmatrix} = c$
$M_{22} = \begin{vmatrix} a \end{vmatrix} = a$
Cofactors:
$A_{11} = (-1)^{1+1} M_{11} = (-1)^2 \cdot d = 1 \cdot d = d$
$A_{12} = (-1)^{1+2} M_{12} = (-1)^3 \cdot b = -1 \cdot b = -b$
$A_{21} = (-1)^{2+1} M_{21} = (-1)^3 \cdot c = -1 \cdot c = -c$
$A_{22} = (-1)^{2+2} M_{22} = (-1)^4 \cdot a = 1 \cdot a = a$
Question 2.
(i) $\begin{vmatrix} 1&0&0\\0&1&0\\0&0&1 \end{vmatrix}$
(ii) $\begin{vmatrix} 1&0&4\\3&5&−1\\0&1&2 \end{vmatrix}$
Answer:
The Minor $M_{ij}$ of an element $a_{ij}$ is the determinant of the submatrix obtained by deleting the $i$-th row and $j$-th column.
The Cofactor $A_{ij}$ of an element $a_{ij}$ is given by $A_{ij} = (-1)^{i+j} M_{ij}$.
(i) Determinant: $\begin{vmatrix} 1&0&0\\0&1&0\\0&0&1 \end{vmatrix}$
The elements are $a_{11}=1$, $a_{12}=0$, $a_{13}=0$, $a_{21}=0$, $a_{22}=1$, $a_{23}=0$, $a_{31}=0$, $a_{32}=0$, $a_{33}=1$.
Minors:
$M_{11} = \begin{vmatrix} 1&0\\0&1 \end{vmatrix} = (1)(1) - (0)(0) = 1$
$M_{12} = \begin{vmatrix} 0&0\\0&1 \end{vmatrix} = (0)(1) - (0)(0) = 0$
$M_{13} = \begin{vmatrix} 0&1\\0&0 \end{vmatrix} = (0)(0) - (1)(0) = 0$
$M_{21} = \begin{vmatrix} 0&0\\0&1 \end{vmatrix} = (0)(1) - (0)(0) = 0$
$M_{22} = \begin{vmatrix} 1&0\\0&1 \end{vmatrix} = (1)(1) - (0)(0) = 1$
$M_{23} = \begin{vmatrix} 1&0\\0&0 \end{vmatrix} = (1)(0) - (0)(0) = 0$
$M_{31} = \begin{vmatrix} 0&0\\1&0 \end{vmatrix} = (0)(0) - (0)(1) = 0$
$M_{32} = \begin{vmatrix} 1&0\\0&0 \end{vmatrix} = (1)(0) - (0)(0) = 0$
$M_{33} = \begin{vmatrix} 1&0\\0&1 \end{vmatrix} = (1)(1) - (0)(0) = 1$
Cofactors:
$A_{11} = (-1)^{1+1} M_{11} = 1 \cdot 1 = 1$
$A_{12} = (-1)^{1+2} M_{12} = -1 \cdot 0 = 0$
$A_{13} = (-1)^{1+3} M_{13} = 1 \cdot 0 = 0$
$A_{21} = (-1)^{2+1} M_{21} = -1 \cdot 0 = 0$
$A_{22} = (-1)^{2+2} M_{22} = 1 \cdot 1 = 1$
$A_{23} = (-1)^{2+3} M_{23} = -1 \cdot 0 = 0$
$A_{31} = (-1)^{3+1} M_{31} = 1 \cdot 0 = 0$
$A_{32} = (-1)^{3+2} M_{32} = -1 \cdot 0 = 0$
$A_{33} = (-1)^{3+3} M_{33} = 1 \cdot 1 = 1$
(ii) Determinant: $\begin{vmatrix} 1&0&4\\3&5&−1\\0&1&2 \end{vmatrix}$
The elements are $a_{11}=1$, $a_{12}=0$, $a_{13}=4$, $a_{21}=3$, $a_{22}=5$, $a_{23}=-1$, $a_{31}=0$, $a_{32}=1$, $a_{33}=2$.
Minors:
$M_{11} = \begin{vmatrix} 5&−1\\1&2 \end{vmatrix} = (5)(2) - (−1)(1) = 10 + 1 = 11$
$M_{12} = \begin{vmatrix} 3&−1\\0&2 \end{vmatrix} = (3)(2) - (−1)(0) = 6 - 0 = 6$
$M_{13} = \begin{vmatrix} 3&5\\0&1 \end{vmatrix} = (3)(1) - (5)(0) = 3 - 0 = 3$
$M_{21} = \begin{vmatrix} 0&4\\1&2 \end{vmatrix} = (0)(2) - (4)(1) = 0 - 4 = -4$
$M_{22} = \begin{vmatrix} 1&4\\0&2 \end{vmatrix} = (1)(2) - (4)(0) = 2 - 0 = 2$
$M_{23} = \begin{vmatrix} 1&0\\0&1 \end{vmatrix} = (1)(1) - (0)(0) = 1 - 0 = 1$
$M_{31} = \begin{vmatrix} 0&4\\5&−1 \end{vmatrix} = (0)(−1) - (4)(5) = 0 - 20 = -20$
$M_{32} = \begin{vmatrix} 1&4\\3&−1 \end{vmatrix} = (1)(−1) - (4)(3) = -1 - 12 = -13$
$M_{33} = \begin{vmatrix} 1&0\\3&5 \end{vmatrix} = (1)(5) - (0)(3) = 5 - 0 = 5$
Cofactors:
$A_{11} = (-1)^{1+1} M_{11} = 1 \cdot 11 = 11$
$A_{12} = (-1)^{1+2} M_{12} = -1 \cdot 6 = -6$
$A_{13} = (-1)^{1+3} M_{13} = 1 \cdot 3 = 3$
$A_{21} = (-1)^{2+1} M_{21} = -1 \cdot (-4) = 4$
$A_{22} = (-1)^{2+2} M_{22} = 1 \cdot 2 = 2$
$A_{23} = (-1)^{2+3} M_{23} = -1 \cdot 1 = -1$
$A_{31} = (-1)^{3+1} M_{31} = 1 \cdot (-20) = -20$
$A_{32} = (-1)^{3+2} M_{32} = -1 \cdot (-13) = 13$
$A_{33} = (-1)^{3+3} M_{33} = 1 \cdot 5 = 5$
Question 3. Using Cofactors of elements of second row, evaluate ∆ = $\begin{vmatrix} 5&3&8\\2&0&1\\1&2&3 \end{vmatrix}$ .
Answer:
Given:
The determinant $\Delta = \begin{vmatrix} 5&3&8\\2&0&1\\1&2&3 \end{vmatrix}$.
To Evaluate:
The value of the determinant using cofactors of the second row.
Solution:
The elements of the second row are $a_{21}=2$, $a_{22}=0$, and $a_{23}=1$.
The determinant $\Delta$ can be evaluated using the cofactors of the second row by the formula:
$\Delta = a_{21} A_{21} + a_{22} A_{22} + a_{23} A_{23}$
where $A_{ij}$ is the cofactor of the element $a_{ij}$.
First, we find the minors $M_{ij}$ for the elements in the second row:
$M_{21} = \begin{vmatrix} 3&8\\2&3 \end{vmatrix} = (3)(3) - (8)(2) = 9 - 16 = -7$
$M_{22} = \begin{vmatrix} 5&8\\1&3 \end{vmatrix} = (5)(3) - (8)(1) = 15 - 8 = 7$
$M_{23} = \begin{vmatrix} 5&3\\1&2 \end{vmatrix} = (5)(2) - (3)(1) = 10 - 3 = 7$
Now, we find the cofactors $A_{ij}$ using the formula $A_{ij} = (-1)^{i+j} M_{ij}$:
$A_{21} = (-1)^{2+1} M_{21} = (-1)^3 (-7) = -1 \cdot (-7) = 7$
$A_{22} = (-1)^{2+2} M_{22} = (-1)^4 (7) = 1 \cdot 7 = 7$
$A_{23} = (-1)^{2+3} M_{23} = (-1)^5 (7) = -1 \cdot 7 = -7$
Finally, substitute the elements of the second row and their cofactors into the determinant formula:
$\Delta = a_{21} A_{21} + a_{22} A_{22} + a_{23} A_{23}$
$\Delta = (2)(7) + (0)(7) + (1)(-7)$
$\Delta = 14 + 0 - 7$
$\Delta = 7$
The value of the determinant is 7.
Question 4. Using Cofactors of elements of third column, evaluate ∆ = $\begin{vmatrix} 1&x&yz\\1&y&zx\\1&z&xy \end{vmatrix}$ .
Answer:
Given:
The determinant $\Delta = \begin{vmatrix} 1&x&yz\\1&y&zx\\1&z&xy \end{vmatrix}$.
To Evaluate:
The value of the determinant using cofactors of the third column.
Solution:
The elements of the third column are $a_{13}=yz$, $a_{23}=zx$, and $a_{33}=xy$.
The determinant $\Delta$ can be evaluated using the cofactors of the third column by the formula:
$\Delta = a_{13} A_{13} + a_{23} A_{23} + a_{33} A_{33}$
where $A_{ij}$ is the cofactor of the element $a_{ij}$, given by $A_{ij} = (-1)^{i+j} M_{ij}$.
First, we find the minors $M_{ij}$ for the elements in the third column:
$M_{13} = \begin{vmatrix} 1&y\\1&z \end{vmatrix} = (1)(z) - (y)(1) = z - y$
$M_{23} = \begin{vmatrix} 1&x\\1&z \end{vmatrix} = (1)(z) - (x)(1) = z - x$
$M_{33} = \begin{vmatrix} 1&x\\1&y \end{vmatrix} = (1)(y) - (x)(1) = y - x$
Now, we find the cofactors $A_{ij}$:
$A_{13} = (-1)^{1+3} M_{13} = (-1)^4 (z - y) = 1 \cdot (z - y) = z - y$
$A_{23} = (-1)^{2+3} M_{23} = (-1)^5 (z - x) = -1 \cdot (z - x) = x - z$
$A_{33} = (-1)^{3+3} M_{33} = (-1)^6 (y - x) = 1 \cdot (y - x) = y - x$
Finally, substitute the elements of the third column and their cofactors into the determinant formula:
$\Delta = a_{13} A_{13} + a_{23} A_{23} + a_{33} A_{33}$
$\Delta = (yz)(z - y) + (zx)(x - z) + (xy)(y - x)$
Expand and simplify the expression:
$\Delta = yz^2 - y^2z + zx^2 - z^2x + xy^2 - x^2y$
Rearrange the terms, keeping the target cyclic factorisation $(x-y)(y-z)(z-x)$ in mind:
$\Delta = -x^2y + x^2z + xy^2 - y^2z - xz^2 + yz^2$
Group terms with common factors:
$\Delta = x^2(z - y) + x(y^2 - z^2) + yz(z - y)$
Factor $y^2 - z^2$ as $(y-z)(y+z)$:
$\Delta = x^2(z - y) + x(y-z)(y+z) + yz(z - y)$
Rewrite $(y-z)$ as $-(z-y)$:
$\Delta = x^2(z - y) - x(z-y)(y+z) + yz(z - y)$
Factor out the common term $(z - y)$:
$\Delta = (z - y) [x^2 - x(y+z) + yz]$
Simplify the expression inside the bracket:
$\Delta = (z - y) [x^2 - xy - xz + yz]$
Factor the quadratic expression by grouping terms:
$x^2 - xy - xz + yz = x(x - y) - z(x - y) = (x - y)(x - z)$
Substitute this back into the expression for $\Delta$:
$\Delta = (z - y)(x - y)(x - z)$
To write this in the standard cyclic form $(x-y)(y-z)(z-x)$, we rearrange the terms. $(z-y) = -(y-z)$ and $(x-z) = -(z-x)$.
$\Delta = -(y-z) \cdot (x-y) \cdot -(z-x)$
$\Delta = (-1)(-1) (x-y)(y-z)(z-x)$
$\Delta = (x-y)(y-z)(z-x)$
The value of the determinant is $(x-y)(y-z)(z-x)$.
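The factorised answer can also be confirmed symbolically. A minimal SymPy sketch (an assumed tool, not part of the textbook solution) checks that the determinant minus $(x-y)(y-z)(z-x)$ simplifies to zero:

```python
import sympy as sp

x, y, z = sp.symbols('x y z')
D = sp.Matrix([[1, x, y*z],
               [1, y, z*x],
               [1, z, x*y]]).det()
print(sp.simplify(D - (x - y)*(y - z)*(z - x)))   # 0 confirms the factorisation
```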
Question 5. If ∆ = $\begin{vmatrix} a_{11}&a_{12}&a_{13}\\a_{21}&a_{22}&a_{23}\\a_{31}&a_{32}&a_{33} \end{vmatrix}$ and $A_{ij}$ is the cofactor of $a_{ij}$, then the value of ∆ is given by
(A) $a_{11}A_{31} + a_{12}A_{32} + a_{13}A_{33}$
(B) $a_{11}A_{11} + a_{12}A_{21} + a_{13}A_{31}$
(C) $a_{21}A_{11} + a_{22}A_{12} + a_{23}A_{13}$
(D) $a_{11}A_{11} + a_{21}A_{21} + a_{31}A_{31}$
Answer:
Solution:
The value of a determinant can be calculated by summing the products of the elements of any one row (or column) with their corresponding cofactors.
Let the determinant be $\Delta$. For a 3x3 matrix, the value of the determinant can be expanded along any row $i$ as:
$\Delta = a_{i1}A_{i1} + a_{i2}A_{i2} + a_{i3}A_{i3}$
or along any column $j$ as:
$\Delta = a_{1j}A_{1j} + a_{2j}A_{2j} + a_{3j}A_{3j}$
Let's examine the given options:
(A) $a_{11} A_{31} + a_{12} A_{32} + a_{13} A_{33}$ : This is the sum of the product of elements of the first row ($a_{11}, a_{12}, a_{13}$) with the cofactors of the third row ($A_{31}, A_{32}, A_{33}$). The sum of the product of elements of a row (or column) with the cofactors of another row (or column) is always zero.
(B) $a_{11} A_{11} + a_{12} A_{21} + a_{13} A_{31}$ : This is the sum of elements of the first row ($a_{11}, a_{12}, a_{13}$) with the cofactors of the first column ($A_{11}, A_{21}, A_{31}$). This is not a valid expansion.
(C) $a_{21} A_{11} + a_{22} A_{12} + a_{23} A_{13}$ : This is the sum of elements of the second row ($a_{21}, a_{22}, a_{23}$) with the cofactors of the first row ($A_{11}, A_{12}, A_{13}$). This sum is also zero.
(D) $a_{11} A_{11} + a_{21} A_{21} + a_{31} A_{31}$ : This is the sum of the product of elements of the first column ($a_{11}, a_{21}, a_{31}$) with their corresponding cofactors ($A_{11}, A_{21}, A_{31}$). This is a valid expansion along the first column.
Therefore, the correct expression for the value of $\Delta$ is the sum of the product of elements of a row (or column) with their corresponding cofactors.
Option (D) represents the expansion along the first column.
$\Delta = a_{11}A_{11} + a_{21}A_{21} + a_{31}A_{31}$
The correct option is (D) $a_{11} A_{11}+ a_{21} A_{21} + a_{31} A_{31}$.
Example 23 to 26 (Before Exercise 4.5)
Example 23: Find adj A for A = $\begin{bmatrix}2&3\\1&4 \end{bmatrix}$
Answer:
Given:
The given matrix is:
A = $\begin{bmatrix}2&3\\1&4 \end{bmatrix}$
To Find:
The adjoint of matrix A, denoted as adj A.
Solution:
The adjoint of a matrix is the transpose of its cofactor matrix. First, we need to find the cofactors of each element of matrix A.
The matrix A can be represented as:
A = $\begin{bmatrix}a_{11}&a_{12}\\a_{21}&a_{22} \end{bmatrix} = \begin{bmatrix}2&3\\1&4 \end{bmatrix}$
The cofactor $C_{ij}$ of an element $a_{ij}$ is given by the formula $C_{ij} = (-1)^{i+j}M_{ij}$, where $M_{ij}$ is the minor of the element $a_{ij}$.
1. Cofactor of $a_{11}$ (which is 2):
$C_{11} = (-1)^{1+1} M_{11} = (1)(4) = 4$
2. Cofactor of $a_{12}$ (which is 3):
$C_{12} = (-1)^{1+2} M_{12} = (-1)(1) = -1$
3. Cofactor of $a_{21}$ (which is 1):
$C_{21} = (-1)^{2+1} M_{21} = (-1)(3) = -3$
4. Cofactor of $a_{22}$ (which is 4):
$C_{22} = (-1)^{2+2} M_{22} = (1)(2) = 2$
Now, we form the cofactor matrix by placing the cofactors in their corresponding positions:
Cofactor(A) = $\begin{bmatrix}C_{11}&C_{12}\\C_{21}&C_{22} \end{bmatrix} = \begin{bmatrix}4&-1\\-3&2 \end{bmatrix}$
The adjoint of A is the transpose of the cofactor matrix.
adj A = (Cofactor(A))$^T$ = $\begin{bmatrix}4&-1\\-3&2 \end{bmatrix}^T$
adj A = $\begin{bmatrix}4&-3\\-1&2 \end{bmatrix}$
Thus, the adjoint of the matrix A is $\begin{bmatrix}4&-3\\-1&2 \end{bmatrix}$.
Alternate Solution (Shortcut for 2x2 Matrix):
For any 2x2 matrix, A = $\begin{bmatrix}a&b\\c&d \end{bmatrix}$, its adjoint can be found directly by:
1. Swapping the elements on the main diagonal (a and d).
2. Changing the signs of the off-diagonal elements (b and c).
adj A = $\begin{bmatrix}d&-b\\-c&a \end{bmatrix}$
For the given matrix A = $\begin{bmatrix}2&3\\1&4 \end{bmatrix}$:
1. Swap 2 and 4 to get $\begin{bmatrix}4&...\\...&2 \end{bmatrix}$.
2. Change the signs of 3 and 1 to get $\begin{bmatrix}...&-3\\-1&... \end{bmatrix}$.
Combining these steps, we get:
adj A = $\begin{bmatrix}4&-3\\-1&2 \end{bmatrix}$
This result matches the one obtained by the formal method.
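For reference, a small sketch (assuming NumPy; the function name `adjoint` is our own) that builds the adjoint as the transpose of the cofactor matrix; for this $2 \times 2$ matrix it reproduces the swap-and-negate shortcut:

```python
import numpy as np

def adjoint(M):
    """adj(M) = transpose of the cofactor matrix of M."""
    n = M.shape[0]
    C = np.zeros_like(M)
    for i in range(n):
        for j in range(n):
            minor = np.delete(np.delete(M, i, axis=0), j, axis=1)
            C[i, j] = (-1) ** (i + j) * np.linalg.det(minor)
    return C.T

A = np.array([[2.0, 3.0],
              [1.0, 4.0]])
print(adjoint(A))   # [[ 4. -3.]
                    #  [-1.  2.]]
```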
Example 24: If A = $\begin{bmatrix}1&3&3\\1&4&3\\1&3&4 \end{bmatrix}$ , then verify that $A \,\text{adj } A = |A|\, I$. Also find $A^{-1}$.
Answer:
Given:
$A = \begin{bmatrix}1&3&3\\1&4&3\\1&3&4 \end{bmatrix}$
To Verify:
$A \text{ adj } A = |A| I$
To Find:
$A^{-1}$
Solution:
First, we find the determinant of the matrix A.
$|A| = \det(A) = \begin{vmatrix}1&3&3\\1&4&3\\1&3&4 \end{vmatrix}$
Expanding along the first row:
$|A| = 1 \begin{vmatrix} 4 & 3 \\ 3 & 4 \end{vmatrix} - 3 \begin{vmatrix} 1 & 3 \\ 1 & 4 \end{vmatrix} + 3 \begin{vmatrix} 1 & 4 \\ 1 & 3 \end{vmatrix}$
$|A| = 1(4 \times 4 - 3 \times 3) - 3(1 \times 4 - 3 \times 1) + 3(1 \times 3 - 4 \times 1)$
$|A| = 1(16 - 9) - 3(4 - 3) + 3(3 - 4)$
$|A| = 1(7) - 3(1) + 3(-1)$
$|A| = 7 - 3 - 3 = 1$
So, $|A| = 1$. Since $|A| \neq 0$, the matrix A is invertible.
Next, we find the adjoint of A. First, we calculate the matrix of cofactors.
$C_{11} = (-1)^{1+1} \begin{vmatrix} 4 & 3 \\ 3 & 4 \end{vmatrix} = (1)(16 - 9) = 7$
$C_{12} = (-1)^{1+2} \begin{vmatrix} 1 & 3 \\ 1 & 4 \end{vmatrix} = (-1)(4 - 3) = -1$
$C_{13} = (-1)^{1+3} \begin{vmatrix} 1 & 4 \\ 1 & 3 \end{vmatrix} = (1)(3 - 4) = -1$
$C_{21} = (-1)^{2+1} \begin{vmatrix} 3 & 3 \\ 3 & 4 \end{vmatrix} = (-1)(12 - 9) = -3$
$C_{22} = (-1)^{2+2} \begin{vmatrix} 1 & 3 \\ 1 & 4 \end{vmatrix} = (1)(4 - 3) = 1$
$C_{23} = (-1)^{2+3} \begin{vmatrix} 1 & 3 \\ 1 & 3 \end{vmatrix} = (-1)(3 - 3) = 0$
$C_{31} = (-1)^{3+1} \begin{vmatrix} 3 & 3 \\ 4 & 3 \end{vmatrix} = (1)(9 - 12) = -3$
$C_{32} = (-1)^{3+2} \begin{vmatrix} 1 & 3 \\ 1 & 3 \end{vmatrix} = (-1)(3 - 3) = 0$
$C_{33} = (-1)^{3+3} \begin{vmatrix} 1 & 3 \\ 1 & 4 \end{vmatrix} = (1)(4 - 3) = 1$
The matrix of cofactors is:
$\text{Cofactor}(A) = \begin{bmatrix} 7 & -1 & -1 \\ -3 & 1 & 0 \\ -3 & 0 & 1 \end{bmatrix}$
The adjoint of A is the transpose of the matrix of cofactors:
$\text{adj } A = (\text{Cofactor}(A))^T = \begin{bmatrix} 7 & -3 & -3 \\ -1 & 1 & 0 \\ -1 & 0 & 1 \end{bmatrix}$
Now, we verify the property $A \text{ adj } A = |A| I$.
First, calculate $|A| I$:
$|A| I = 1 \cdot \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix} = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix}$
Next, calculate $A \text{ adj } A$:
$A \text{ adj } A = \begin{bmatrix}1&3&3\\1&4&3\\1&3&4 \end{bmatrix} \begin{bmatrix} 7 & -3 & -3 \\ -1 & 1 & 0 \\ -1 & 0 & 1 \end{bmatrix}$
Multiplying the matrices:
$A \text{ adj } A = \begin{bmatrix} 1(7)+3(-1)+3(-1) & 1(-3)+3(1)+3(0) & 1(-3)+3(0)+3(1) \\ 1(7)+4(-1)+3(-1) & 1(-3)+4(1)+3(0) & 1(-3)+4(0)+3(1) \\ 1(7)+3(-1)+4(-1) & 1(-3)+3(1)+4(0) & 1(-3)+3(0)+4(1) \end{bmatrix}$
$A \text{ adj } A = \begin{bmatrix} 7-3-3 & -3+3+0 & -3+0+3 \\ 7-4-3 & -3+4+0 & -3+0+3 \\ 7-3-4 & -3+3+0 & -3+0+4 \end{bmatrix}$
$A \text{ adj } A = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix}$
Since $A \text{ adj } A = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix}$ and $|A| I = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix}$, we have verified that $A \text{ adj } A = |A| I$.
Finally, we find the inverse of A using the formula $A^{-1} = \frac{1}{|A|} \text{ adj } A$.
Since $|A| = 1$ and $\text{adj } A = \begin{bmatrix} 7 & -3 & -3 \\ -1 & 1 & 0 \\ -1 & 0 & 1 \end{bmatrix}$,
$A^{-1} = \frac{1}{1} \begin{bmatrix} 7 & -3 & -3 \\ -1 & 1 & 0 \\ -1 & 0 & 1 \end{bmatrix}$
$A^{-1} = \begin{bmatrix} 7 & -3 & -3 \\ -1 & 1 & 0 \\ -1 & 0 & 1 \end{bmatrix}$
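Both the verification $A \,\text{adj } A = |A| I$ and the inverse can be cross-checked numerically. The sketch below reuses a cofactor-based `adjoint` helper like the one sketched after Example 23 (an illustration assuming NumPy, not part of the textbook method):

```python
import numpy as np

def adjoint(M):
    n = M.shape[0]
    C = np.zeros_like(M)
    for i in range(n):
        for j in range(n):
            minor = np.delete(np.delete(M, i, axis=0), j, axis=1)
            C[i, j] = (-1) ** (i + j) * np.linalg.det(minor)
    return C.T

A = np.array([[1.0, 3.0, 3.0],
              [1.0, 4.0, 3.0],
              [1.0, 3.0, 4.0]])
adjA = adjoint(A)
detA = np.linalg.det(A)

print(np.allclose(A @ adjA, detA * np.eye(3)))      # True: A adj A = |A| I
print(np.allclose(adjA / detA, np.linalg.inv(A)))   # True: A^{-1} = adj A / |A|
```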
Example 25: If A = $\begin{bmatrix}2&3\\1&−4 \end{bmatrix}$ and B = $\begin{bmatrix}1&−2\\−1&3 \end{bmatrix}$ , then verify that $(AB)^{-1} = B^{-1}A^{-1}$
Answer:
Given:
$A = \begin{bmatrix}2&3\\1&−4 \end{bmatrix}$
$B = \begin{bmatrix}1&−2\\−1&3 \end{bmatrix}$
To Verify:
$(AB)^{-1} = B^{-1}A^{-1}$
Solution:
First, we calculate the product matrix AB.
$AB = \begin{bmatrix}2&3\\1&−4 \end{bmatrix} \begin{bmatrix}1&−2\\−1&3 \end{bmatrix}$
$AB = \begin{bmatrix} (2)(1)+(3)(-1) & (2)(-2)+(3)(3) \\ (1)(1)+(-4)(-1) & (1)(-2)+(-4)(3) \end{bmatrix}$
$AB = \begin{bmatrix} 2-3 & -4+9 \\ 1+4 & -2-12 \end{bmatrix}$
$AB = \begin{bmatrix} -1 & 5 \\ 5 & -14 \end{bmatrix}$
Next, we find the inverse of AB, $(AB)^{-1}$. We need the determinant and the adjoint of AB.
$\det(AB) = (-1)(-14) - (5)(5) = 14 - 25 = -11$
Since $\det(AB) \neq 0$, $(AB)$ is invertible.
The adjoint of a $2 \times 2$ matrix $\begin{bmatrix} a & b \\ c & d \end{bmatrix}$ is $\begin{bmatrix} d & -b \\ -c & a \end{bmatrix}$.
$\text{adj}(AB) = \text{adj}\begin{bmatrix} -1 & 5 \\ 5 & -14 \end{bmatrix} = \begin{bmatrix} -14 & -5 \\ -5 & -1 \end{bmatrix}$
So, $(AB)^{-1} = \frac{1}{\det(AB)} \text{adj}(AB) = \frac{1}{-11} \begin{bmatrix} -14 & -5 \\ -5 & -1 \end{bmatrix}$
$(AB)^{-1} = \begin{bmatrix} \frac{-14}{-11} & \frac{-5}{-11} \\ \frac{-5}{-11} & \frac{-1}{-11} \end{bmatrix} = \begin{bmatrix} \frac{14}{11} & \frac{5}{11} \\ \frac{5}{11} & \frac{1}{11} \end{bmatrix}$ ... (i)
Now, we find the inverses of A and B, $A^{-1}$ and $B^{-1}$.
For matrix A:
$A = \begin{bmatrix}2&3\\1&−4 \end{bmatrix}$
$\det(A) = (2)(-4) - (3)(1) = -8 - 3 = -11$
Since $\det(A) \neq 0$, A is invertible.
$\text{adj}(A) = \begin{bmatrix} -4 & -3 \\ -1 & 2 \end{bmatrix}$
$A^{-1} = \frac{1}{\det(A)} \text{adj}(A) = \frac{1}{-11} \begin{bmatrix} -4 & -3 \\ -1 & 2 \end{bmatrix} = \begin{bmatrix} \frac{4}{11} & \frac{3}{11} \\ \frac{1}{11} & -\frac{2}{11} \end{bmatrix}$
For matrix B:
$B = \begin{bmatrix}1&−2\\−1&3 \end{bmatrix}$
$\det(B) = (1)(3) - (-2)(-1) = 3 - 2 = 1$
Since $\det(B) \neq 0$, B is invertible.
$\text{adj}(B) = \begin{bmatrix} 3 & 2 \\ 1 & 1 \end{bmatrix}$
$B^{-1} = \frac{1}{\det(B)} \text{adj}(B) = \frac{1}{1} \begin{bmatrix} 3 & 2 \\ 1 & 1 \end{bmatrix} = \begin{bmatrix} 3 & 2 \\ 1 & 1 \end{bmatrix}$
Finally, we calculate $B^{-1}A^{-1}$.
$B^{-1}A^{-1} = \begin{bmatrix} 3 & 2 \\ 1 & 1 \end{bmatrix} \begin{bmatrix} \frac{4}{11} & \frac{3}{11} \\ \frac{1}{11} & -\frac{2}{11} \end{bmatrix}$
$B^{-1}A^{-1} = \begin{bmatrix} (3)(\frac{4}{11})+(2)(\frac{1}{11}) & (3)(\frac{3}{11})+(2)(-\frac{2}{11}) \\ (1)(\frac{4}{11})+(1)(\frac{1}{11}) & (1)(\frac{3}{11})+(1)(-\frac{2}{11}) \end{bmatrix}$
$B^{-1}A^{-1} = \begin{bmatrix} \frac{12}{11}+\frac{2}{11} & \frac{9}{11}-\frac{4}{11} \\ \frac{4}{11}+\frac{1}{11} & \frac{3}{11}-\frac{2}{11} \end{bmatrix}$
$B^{-1}A^{-1} = \begin{bmatrix} \frac{14}{11} & \frac{5}{11} \\ \frac{5}{11} & \frac{1}{11} \end{bmatrix}$ ... (ii)
Comparing the results from (i) and (ii), we see that $(AB)^{-1} = B^{-1}A^{-1}$.
Thus, the property is verified.
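A one-off numerical check of this reversal law (a sketch assuming NumPy, offered only as an illustration):

```python
import numpy as np

A = np.array([[2.0, 3.0], [1.0, -4.0]])
B = np.array([[1.0, -2.0], [-1.0, 3.0]])

lhs = np.linalg.inv(A @ B)
rhs = np.linalg.inv(B) @ np.linalg.inv(A)
print(np.allclose(lhs, rhs))   # True: (AB)^{-1} = B^{-1} A^{-1}
print(lhs)                     # approx [[14/11, 5/11], [5/11, 1/11]]
```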
Example 26: Show that the matrix A = $\begin{bmatrix}2&3\\1&2 \end{bmatrix}$ satisfies the equation $A^2 - 4A + I = O$, where I is the $2 \times 2$ identity matrix and O is the $2 \times 2$ zero matrix. Using this equation, find $A^{-1}$.
Answer:
Given:
$A = \begin{bmatrix}2&3\\1&2 \end{bmatrix}$
I is the $2 \times 2$ identity matrix, $I = \begin{bmatrix}1&0\\0&1 \end{bmatrix}$
O is the $2 \times 2$ zero matrix, $O = \begin{bmatrix}0&0\\0&0 \end{bmatrix}$
To Show:
$A^2 - 4A + I = O$
To Find:
$A^{-1}$ using the given equation.
Solution:
First, we calculate $A^2$.
$A^2 = A \cdot A = \begin{bmatrix}2&3\\1&2 \end{bmatrix} \begin{bmatrix}2&3\\1&2 \end{bmatrix}$
$A^2 = \begin{bmatrix} (2)(2)+(3)(1) & (2)(3)+(3)(2) \\ (1)(2)+(2)(1) & (1)(3)+(2)(2) \end{bmatrix}$
$A^2 = \begin{bmatrix} 4+3 & 6+6 \\ 2+2 & 3+4 \end{bmatrix} = \begin{bmatrix} 7 & 12 \\ 4 & 7 \end{bmatrix}$
Next, we calculate $4A$.
$4A = 4 \begin{bmatrix}2&3\\1&2 \end{bmatrix} = \begin{bmatrix} 4(2) & 4(3) \\ 4(1) & 4(2) \end{bmatrix} = \begin{bmatrix} 8 & 12 \\ 4 & 8 \end{bmatrix}$
Now, we substitute the values into the expression $A^2 - 4A + I$:
$A^2 - 4A + I = \begin{bmatrix} 7 & 12 \\ 4 & 7 \end{bmatrix} - \begin{bmatrix} 8 & 12 \\ 4 & 8 \end{bmatrix} + \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix}$
$A^2 - 4A + I = \begin{bmatrix} 7-8 & 12-12 \\ 4-4 & 7-8 \end{bmatrix} + \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix}$
$A^2 - 4A + I = \begin{bmatrix} -1 & 0 \\ 0 & -1 \end{bmatrix} + \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix}$
$A^2 - 4A + I = \begin{bmatrix} -1+1 & 0+0 \\ 0+0 & -1+1 \end{bmatrix} = \begin{bmatrix} 0 & 0 \\ 0 & 0 \end{bmatrix}$
Since $\begin{bmatrix} 0 & 0 \\ 0 & 0 \end{bmatrix}$ is the zero matrix O, we have shown that $A^2 - 4A + I = O$.
Now, we use the equation $A^2 - 4A + I = O$ to find $A^{-1}$.
We know that $A$ is invertible because $|A| = (2)(2) - (3)(1) = 4 - 3 = 1 \neq 0$.
Multiply the equation by $A^{-1}$ from the left:
$A^{-1}(A^2 - 4A + I) = A^{-1}O$
Using the distributive property and the properties of inverse and identity matrices ($A^{-1}A = I$, $IA = A$, $A^{-1}I = A^{-1}$, $A^{-1}O = O$):
$A^{-1}A^2 - A^{-1}(4A) + A^{-1}I = O$
$(A^{-1}A)A - 4(A^{-1}A) + A^{-1} = O$
$IA - 4I + A^{-1} = O$
$A - 4I + A^{-1} = O$
Now, we solve for $A^{-1}$:
$A^{-1} = 4I - A$
Finally, we calculate $4I - A$:
$A^{-1} = 4 \begin{bmatrix}1&0\\0&1 \end{bmatrix} - \begin{bmatrix}2&3\\1&2 \end{bmatrix}$
$A^{-1} = \begin{bmatrix} 4(1) & 4(0) \\ 4(0) & 4(1) \end{bmatrix} - \begin{bmatrix}2&3\\1&2 \end{bmatrix}$
$A^{-1} = \begin{bmatrix} 4 & 0 \\ 0 & 4 \end{bmatrix} - \begin{bmatrix}2&3\\1&2 \end{bmatrix}$
$A^{-1} = \begin{bmatrix} 4-2 & 0-3 \\ 0-1 & 4-2 \end{bmatrix}$
$A^{-1} = \begin{bmatrix} 2 & -3 \\ -1 & 2 \end{bmatrix}$
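Both the matrix identity and the resulting inverse can be checked in a few lines (a sketch assuming NumPy, not part of the textbook solution):

```python
import numpy as np

A = np.array([[2.0, 3.0], [1.0, 2.0]])
I = np.eye(2)

print(np.allclose(A @ A - 4 * A + I, np.zeros((2, 2))))  # True: A^2 - 4A + I = O
print(np.allclose(4 * I - A, np.linalg.inv(A)))          # True: A^{-1} = 4I - A
```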
Exercise 4.5
Find adjoint of each of the matrices in Exercises 1 and 2.
Question 1. $\begin{bmatrix}1&2\\3&4 \end{bmatrix}$
Answer:
Given:
Let the given matrix be A.
$A = \begin{bmatrix}1&2\\3&4 \end{bmatrix}$
To Find:
Adjoint of matrix A (adj A).
Solution:
For a $2 \times 2$ matrix $M = \begin{bmatrix}a&b\\c&d \end{bmatrix}$, the adjoint of M is given by interchanging the diagonal elements and changing the sign of the off-diagonal elements.
$\text{adj } M = \begin{bmatrix}d&-b\\-c&a \end{bmatrix}$
In our case, for matrix $A = \begin{bmatrix}1&2\\3&4 \end{bmatrix}$, we have $a=1$, $b=2$, $c=3$, and $d=4$.
Applying the formula for the adjoint of a $2 \times 2$ matrix:
$\text{adj } A = \begin{bmatrix}4&-2\\-3&1 \end{bmatrix}$
Question 2. $\begin{bmatrix}1&−1&2\\2&3&5\\−2&0&1 \end{bmatrix}$
Answer:
Given:
Let the given matrix be A.
$A = \begin{bmatrix}1&-1&2\\2&3&5\\-2&0&1 \end{bmatrix}$
To Find:
Adjoint of matrix A (adj A).
Solution:
The adjoint of a matrix A is the transpose of the matrix of its cofactors.
First, we calculate the cofactors $C_{ij} = (-1)^{i+j} M_{ij}$, where $M_{ij}$ is the minor of the element $a_{ij}$.
$C_{11} = (-1)^{1+1} \begin{vmatrix} 3 & 5 \\ 0 & 1 \end{vmatrix} = (1)((3)(1) - (5)(0)) = 3 - 0 = 3$
$C_{12} = (-1)^{1+2} \begin{vmatrix} 2 & 5 \\ -2 & 1 \end{vmatrix} = (-1)((2)(1) - (5)(-2)) = (-1)(2 + 10) = -12$
$C_{13} = (-1)^{1+3} \begin{vmatrix} 2 & 3 \\ -2 & 0 \end{vmatrix} = (1)((2)(0) - (3)(-2)) = 0 + 6 = 6$
$C_{21} = (-1)^{2+1} \begin{vmatrix} -1 & 2 \\ 0 & 1 \end{vmatrix} = (-1)((-1)(1) - (2)(0)) = (-1)(-1 - 0) = 1$
$C_{22} = (-1)^{2+2} \begin{vmatrix} 1 & 2 \\ -2 & 1 \end{vmatrix} = (1)((1)(1) - (2)(-2)) = 1 + 4 = 5$
$C_{23} = (-1)^{2+3} \begin{vmatrix} 1 & -1 \\ -2 & 0 \end{vmatrix} = (-1)((1)(0) - (-1)(-2)) = (-1)(0 - 2) = 2$
$C_{31} = (-1)^{3+1} \begin{vmatrix} -1 & 2 \\ 3 & 5 \end{vmatrix} = (1)((-1)(5) - (2)(3)) = -5 - 6 = -11$
$C_{32} = (-1)^{3+2} \begin{vmatrix} 1 & 2 \\ 2 & 5 \end{vmatrix} = (-1)((1)(5) - (2)(2)) = (-1)(5 - 4) = -1$
$C_{33} = (-1)^{3+3} \begin{vmatrix} 1 & -1 \\ 2 & 3 \end{vmatrix} = (1)((1)(3) - (-1)(2)) = 3 + 2 = 5$
The matrix of cofactors is:
$\text{Cofactor}(A) = \begin{bmatrix} 3 & -12 & 6 \\ 1 & 5 & 2 \\ -11 & -1 & 5 \end{bmatrix}$
The adjoint of A is the transpose of the matrix of cofactors:
$\text{adj } A = (\text{Cofactor}(A))^T = \begin{bmatrix} 3 & 1 & -11 \\ -12 & 5 & -1 \\ 6 & 2 & 5 \end{bmatrix}$
Verify A (adj A) = (adj A) A = | A | I in Exercises 3 and 4
Question 3. $\begin{bmatrix}2&3\\−4&−6 \end{bmatrix}$
Answer:
Given:
Let the given matrix be $A = \begin{bmatrix}2&3\\−4&−6 \end{bmatrix}$.
To Verify:
$A (\text{adj } A) = (\text{adj } A) A = |A| I$.
Verification:
First, we calculate the determinant of A.
$|A| = (2)(-6) - (3)(-4) = -12 + 12 = 0$.
Next, we find the adjoint of A. For a 2x2 matrix $\begin{bmatrix}a&b\\c&d \end{bmatrix}$, the adjoint is $\text{adj}(A) = \begin{bmatrix}d&-b\\-c&a \end{bmatrix}$.
$\text{adj } A = \begin{bmatrix}-6&-3\\-(-4)&2 \end{bmatrix} = \begin{bmatrix}-6&-3\\4&2 \end{bmatrix}$
Now, we compute the products.
1. $A (\text{adj } A)$
$A (\text{adj } A) = \begin{bmatrix}2&3\\-4&-6 \end{bmatrix} \begin{bmatrix}-6&-3\\4&2 \end{bmatrix}$
$A (\text{adj } A) = \begin{bmatrix} 2(-6)+3(4) & 2(-3)+3(2) \\ -4(-6)+(-6)(4) & -4(-3)+(-6)(2) \end{bmatrix}$
$A (\text{adj } A) = \begin{bmatrix} -12+12 & -6+6 \\ 24-24 & 12-12 \end{bmatrix} = \begin{bmatrix} 0&0\\0&0 \end{bmatrix}$
$A (\text{adj } A) = \begin{bmatrix} 0&0\\0&0 \end{bmatrix}$
... (i)
2. $(\text{adj } A) A$
$(\text{adj } A) A = \begin{bmatrix}-6&-3\\4&2 \end{bmatrix} \begin{bmatrix}2&3\\-4&-6 \end{bmatrix}$
$(\text{adj } A) A = \begin{bmatrix} -6(2)+(-3)(-4) & -6(3)+(-3)(-6) \\ 4(2)+2(-4) & 4(3)+2(-6) \end{bmatrix}$
$(\text{adj } A) A = \begin{bmatrix} -12+12 & -18+18 \\ 8-8 & 12-12 \end{bmatrix} = \begin{bmatrix} 0&0\\0&0 \end{bmatrix}$
$(\text{adj } A) A = \begin{bmatrix} 0&0\\0&0 \end{bmatrix}$
... (ii)
3. $|A| I$
Since $|A| = 0$ and I is the identity matrix of order 2, $I = \begin{bmatrix}1&0\\0&1 \end{bmatrix}$.
$|A| I = 0 \cdot \begin{bmatrix}1&0\\0&1 \end{bmatrix} = \begin{bmatrix}0&0\\0&0 \end{bmatrix}$
$|A| I = \begin{bmatrix} 0&0\\0&0 \end{bmatrix}$
... (iii)
From equations (i), (ii), and (iii), we can conclude that:
$A (\text{adj } A) = (\text{adj } A) A = |A| I$.
Hence, verified.
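Because $|A| = 0$ here, both products collapse to the zero matrix. A brief numerical confirmation of this singular case (a sketch assuming NumPy, using the adjoint found above):

```python
import numpy as np

A = np.array([[2.0, 3.0], [-4.0, -6.0]])
adjA = np.array([[-6.0, -3.0], [4.0, 2.0]])   # adjoint computed above

print(np.isclose(np.linalg.det(A), 0.0))          # True: A is singular
print(np.allclose(A @ adjA, np.zeros((2, 2))))    # True
print(np.allclose(adjA @ A, np.zeros((2, 2))))    # True
```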
Question 4. $\begin{bmatrix}1&−1&2\\3&0&−2\\1&0&3 \end{bmatrix}$
Answer:
Given:
Let the given matrix be $A = \begin{bmatrix}1&−1&2\\3&0&−2\\1&0&3 \end{bmatrix}$.
To Verify:
$A (\text{adj } A) = (\text{adj } A) A = |A| I$.
Verification:
First, we calculate the determinant of A. Expanding along the second column ($C_2$) is easiest due to the zeros.
$|A| = -(-1) \begin{vmatrix} 3 & -2 \\ 1 & 3 \end{vmatrix} + 0 - 0 = 1 \cdot ((3)(3) - (-2)(1)) = 9 + 2 = 11$.
Next, we find the cofactors to determine the adjoint of A.
$C_{11} = \begin{vmatrix} 0 & -2 \\ 0 & 3 \end{vmatrix} = 0$, $C_{12} = -\begin{vmatrix} 3 & -2 \\ 1 & 3 \end{vmatrix} = -11$, $C_{13} = \begin{vmatrix} 3 & 0 \\ 1 & 0 \end{vmatrix} = 0$
$C_{21} = -\begin{vmatrix} -1 & 2 \\ 0 & 3 \end{vmatrix} = 3$, $C_{22} = \begin{vmatrix} 1 & 2 \\ 1 & 3 \end{vmatrix} = 1$, $C_{23} = -\begin{vmatrix} 1 & -1 \\ 1 & 0 \end{vmatrix} = -1$
$C_{31} = \begin{vmatrix} -1 & 2 \\ 0 & -2 \end{vmatrix} = 2$, $C_{32} = -\begin{vmatrix} 1 & 2 \\ 3 & -2 \end{vmatrix} = 8$, $C_{33} = \begin{vmatrix} 1 & -1 \\ 3 & 0 \end{vmatrix} = 3$
The matrix of cofactors is $\begin{bmatrix} 0 & -11 & 0 \\ 3 & 1 & -1 \\ 2 & 8 & 3 \end{bmatrix}$.
The adjoint of A is the transpose of the cofactor matrix.
$\text{adj } A = \begin{bmatrix} 0 & 3 & 2 \\ -11 & 1 & 8 \\ 0 & -1 & 3 \end{bmatrix}$.
Now, we compute the products.
1. $A (\text{adj } A)$
$A (\text{adj } A) = \begin{bmatrix}1&-1&2\\3&0&-2\\1&0&3 \end{bmatrix} \begin{bmatrix} 0&3&2\\-11&1&8\\0&-1&3 \end{bmatrix} = \begin{bmatrix} 11&0&0\\0&11&0\\0&0&11 \end{bmatrix}$ ... (i)
2. $(\text{adj } A) A$
$(\text{adj } A) A = \begin{bmatrix} 0&3&2\\-11&1&8\\0&-1&3 \end{bmatrix} \begin{bmatrix}1&-1&2\\3&0&-2\\1&0&3 \end{bmatrix} = \begin{bmatrix} 11&0&0\\0&11&0\\0&0&11 \end{bmatrix}$ ... (ii)
3. $|A| I$
Since $|A| = 11$ and I is the identity matrix of order 3, $I = \begin{bmatrix}1&0&0\\0&1&0\\0&0&1 \end{bmatrix}$.
$|A| I = 11 \cdot \begin{bmatrix}1&0&0\\0&1&0\\0&0&1 \end{bmatrix} = \begin{bmatrix}11&0&0\\0&11&0\\0&0&11 \end{bmatrix}$ ... (iii)
From equations (i), (ii), and (iii), we can conclude that:
$A (\text{adj } A) = (\text{adj } A) A = |A| I$.
Hence, verified.
Find the inverse of each of the matrices (if it exists) given in Exercises 5 to 11.
Question 5. $\begin{bmatrix}2&−2\\4&3 \end{bmatrix}$
Answer:
Given:
Let the given matrix be A.
$A = \begin{bmatrix}2&-2\\4&3 \end{bmatrix}$
To Find:
The inverse of matrix A ($A^{-1}$), if it exists.
Solution:
First, we calculate the determinant of A to check if the inverse exists.
$|A| = \det(A) = (2)(3) - (-2)(4)$
$|A| = 6 - (-8)$
$|A| = 6 + 8 = 14$
Since $|A| = 14 \neq 0$, the matrix A is invertible.
Next, we find the adjoint of A. For a $2 \times 2$ matrix $\begin{bmatrix}a&b\\c&d \end{bmatrix}$, the adjoint is $\begin{bmatrix}d&-b\\-c&a \end{bmatrix}$.
$A = \begin{bmatrix}2&-2\\4&3 \end{bmatrix}$
$\text{adj } A = \begin{bmatrix}3&-(-2)\\-4&2 \end{bmatrix} = \begin{bmatrix}3&2\\-4&2 \end{bmatrix}$
The inverse of a matrix A is given by the formula $A^{-1} = \frac{1}{|A|} \text{ adj } A$.
Substituting the determinant and the adjoint matrix:
$A^{-1} = \frac{1}{14} \begin{bmatrix}3&2\\-4&2 \end{bmatrix}$
Multiplying the scalar with the matrix:
$A^{-1} = \begin{bmatrix} \frac{3}{14} & \frac{2}{14} \\ \frac{-4}{14} & \frac{2}{14} \end{bmatrix}$
Simplifying the fractions:
$A^{-1} = \begin{bmatrix} \frac{3}{14} & \frac{1}{7} \\ -\frac{2}{7} & \frac{1}{7} \end{bmatrix}$
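As an optional cross-check (a sketch assuming NumPy is available), the same inverse can be reproduced numerically from the adjoint formula:

```python
import numpy as np

# 2x2 inverse via A^{-1} = (1/|A|) adj A, for the matrix of Question 5,
# compared against numpy.linalg.inv.
A = np.array([[2, -2],
              [4, 3]], dtype=float)
det_A = A[0, 0] * A[1, 1] - A[0, 1] * A[1, 0]    # 14
adj_A = np.array([[A[1, 1], -A[0, 1]],
                  [-A[1, 0], A[0, 0]]])
A_inv = adj_A / det_A

print(A_inv)                                     # [[3/14, 1/7], [-2/7, 1/7]]
print(np.allclose(A_inv, np.linalg.inv(A)))      # True
```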
Question 6. $\begin{bmatrix}−1&5\\−3&2 \end{bmatrix}$
Answer:
Given:
Let the given matrix be A.
$A = \begin{bmatrix}−1&5\\−3&2 \end{bmatrix}$
To Find:
The inverse of matrix A ($A^{-1}$), if it exists.
Solution:
First, we calculate the determinant of A to check if the inverse exists.
$|A| = \det(A) = (-1)(2) - (5)(-3)$
$|A| = -2 - (-15)$
$|A| = -2 + 15 = 13$
Since $|A| = 13 \neq 0$, the matrix A is invertible.
Next, we find the adjoint of A. For a $2 \times 2$ matrix $\begin{bmatrix}a&b\\c&d \end{bmatrix}$, the adjoint is $\begin{bmatrix}d&-b\\-c&a \end{bmatrix}$.
$A = \begin{bmatrix}−1&5\\−3&2 \end{bmatrix}$
$\text{adj } A = \begin{bmatrix}2&-5\\-(-3)&-1 \end{bmatrix} = \begin{bmatrix}2&-5\\3&-1 \end{bmatrix}$
The inverse of a matrix A is given by the formula $A^{-1} = \frac{1}{|A|} \text{ adj } A$.
Substituting the determinant and the adjoint matrix:
$A^{-1} = \frac{1}{13} \begin{bmatrix}2&-5\\3&-1 \end{bmatrix}$
Multiplying the scalar with the matrix:
$A^{-1} = \begin{bmatrix} \frac{2}{13} & \frac{-5}{13} \\ \frac{3}{13} & \frac{-1}{13} \end{bmatrix}$
Question 7. $\begin{bmatrix}1&2&3\\0&2&4\\0&0&5 \end{bmatrix}$
Answer:
Given:
Let the given matrix be A.
$A = \begin{bmatrix}1&2&3\\0&2&4\\0&0&5 \end{bmatrix}$
To Find:
The inverse of matrix A ($A^{-1}$), if it exists.
Solution:
First, we calculate the determinant of A to check if the inverse exists.
Since A is an upper triangular matrix, its determinant is the product of its diagonal elements.
$|A| = \det(A) = (1)(2)(5)$
$|A| = 10$
Since $|A| = 10 \neq 0$, the matrix A is invertible.
Next, we find the adjoint of A by finding the transpose of the matrix of cofactors.
We calculate the cofactors $C_{ij} = (-1)^{i+j} M_{ij}$, where $M_{ij}$ is the minor of the element $a_{ij}$.
$C_{11} = (-1)^{1+1} \begin{vmatrix} 2 & 4 \\ 0 & 5 \end{vmatrix} = (1)((2)(5) - (4)(0)) = 10 - 0 = 10$
$C_{12} = (-1)^{1+2} \begin{vmatrix} 0 & 4 \\ 0 & 5 \end{vmatrix} = (-1)((0)(5) - (4)(0)) = -(0 - 0) = 0$
$C_{13} = (-1)^{1+3} \begin{vmatrix} 0 & 2 \\ 0 & 0 \end{vmatrix} = (1)((0)(0) - (2)(0)) = 0 - 0 = 0$
$C_{21} = (-1)^{2+1} \begin{vmatrix} 2 & 3 \\ 0 & 5 \end{vmatrix} = (-1)((2)(5) - (3)(0)) = -(10 - 0) = -10$
$C_{22} = (-1)^{2+2} \begin{vmatrix} 1 & 3 \\ 0 & 5 \end{vmatrix} = (1)((1)(5) - (3)(0)) = 5 - 0 = 5$
$C_{23} = (-1)^{2+3} \begin{vmatrix} 1 & 2 \\ 0 & 0 \end{vmatrix} = (-1)((1)(0) - (2)(0)) = -(0 - 0) = 0$
$C_{31} = (-1)^{3+1} \begin{vmatrix} 2 & 3 \\ 2 & 4 \end{vmatrix} = (1)((2)(4) - (3)(2)) = 8 - 6 = 2$
$C_{32} = (-1)^{3+2} \begin{vmatrix} 1 & 3 \\ 0 & 4 \end{vmatrix} = (-1)((1)(4) - (3)(0)) = -(4 - 0) = -4$
$C_{33} = (-1)^{3+3} \begin{vmatrix} 1 & 2 \\ 0 & 2 \end{vmatrix} = (1)((1)(2) - (2)(0)) = 2 - 0 = 2$
The matrix of cofactors is:
$\text{Cofactor}(A) = \begin{bmatrix} 10 & 0 & 0 \\ -10 & 5 & 0 \\ 2 & -4 & 2 \end{bmatrix}$
The adjoint of A is the transpose of the matrix of cofactors:
$\text{adj } A = (\text{Cofactor}(A))^T = \begin{bmatrix} 10 & -10 & 2 \\ 0 & 5 & -4 \\ 0 & 0 & 2 \end{bmatrix}$ ... (i)
The inverse of a matrix A is given by the formula $A^{-1} = \frac{1}{|A|} \text{ adj } A$.
Substituting the determinant $|A| = 10$ and the adjoint matrix from (i):
$A^{-1} = \frac{1}{10} \begin{bmatrix} 10 & -10 & 2 \\ 0 & 5 & -4 \\ 0 & 0 & 2 \end{bmatrix}$
Multiplying the scalar $\frac{1}{10}$ with each element of the matrix:
$A^{-1} = \begin{bmatrix} \frac{10}{10} & \frac{-10}{10} & \frac{2}{10} \\ \frac{0}{10} & \frac{5}{10} & \frac{-4}{10} \\ \frac{0}{10} & \frac{0}{10} & \frac{2}{10} \end{bmatrix}$
Simplifying the fractions:
$A^{-1} = \begin{bmatrix} 1 & -1 & \frac{1}{5} \\ 0 & \frac{1}{2} & -\frac{2}{5} \\ 0 & 0 & \frac{1}{5} \end{bmatrix}$ ... (ii)
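The cofactor-by-cofactor work above generalises directly to code. The following NumPy sketch (an optional aid, assuming NumPy is available) builds the adjoint from minors exactly as described and recovers the same inverse:

```python
import numpy as np

# Adjoint-method inverse for the upper-triangular matrix of Question 7.
# Each cofactor C_ij is (-1)^(i+j) times the determinant of the minor
# obtained by deleting row i and column j.
A = np.array([[1, 2, 3],
              [0, 2, 4],
              [0, 0, 5]], dtype=float)

def adjugate(M):
    n = M.shape[0]
    cof = np.zeros_like(M)
    for i in range(n):
        for j in range(n):
            minor = np.delete(np.delete(M, i, axis=0), j, axis=1)
            cof[i, j] = (-1) ** (i + j) * np.linalg.det(minor)
    return cof.T                      # adj A = transpose of cofactor matrix

det_A = np.prod(np.diag(A))           # 10: product of the diagonal entries
A_inv = adjugate(A) / det_A

print(np.round(A_inv, 6))                        # matches the matrix above
print(np.allclose(A_inv, np.linalg.inv(A)))      # True
```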
Question 8. $\begin{bmatrix}1&0&0\\3&3&0\\5&2&−1 \end{bmatrix}$
Answer:
Given:
Let the given matrix be A.
$A = \begin{bmatrix}1&0&0\\3&3&0\\5&2&−1 \end{bmatrix}$
To Find:
The inverse of matrix A ($A^{-1}$), if it exists.
Solution:
First, we calculate the determinant of A to check if the inverse exists.
Since A is a lower triangular matrix, its determinant is the product of its diagonal elements.
$|A| = \det(A) = (1)(3)(-1)$
$|A| = -3$
Since $|A| = -3 \neq 0$, the matrix A is invertible.
Next, we find the adjoint of A by finding the transpose of the matrix of cofactors.
We calculate the cofactors $C_{ij} = (-1)^{i+j} M_{ij}$, where $M_{ij}$ is the minor of the element $a_{ij}$.
$C_{11} = (-1)^{1+1} \begin{vmatrix} 3 & 0 \\ 2 & -1 \end{vmatrix} = (1)((3)(-1) - (0)(2)) = -3 - 0 = -3$
$C_{12} = (-1)^{1+2} \begin{vmatrix} 3 & 0 \\ 5 & -1 \end{vmatrix} = (-1)((3)(-1) - (0)(5)) = (-1)(-3 - 0) = 3$
$C_{13} = (-1)^{1+3} \begin{vmatrix} 3 & 3 \\ 5 & 2 \end{vmatrix} = (1)((3)(2) - (3)(5)) = 6 - 15 = -9$
$C_{21} = (-1)^{2+1} \begin{vmatrix} 0 & 0 \\ 2 & -1 \end{vmatrix} = (-1)((0)(-1) - (0)(2)) = (-1)(0 - 0) = 0$
$C_{22} = (-1)^{2+2} \begin{vmatrix} 1 & 0 \\ 5 & -1 \end{vmatrix} = (1)((1)(-1) - (0)(5)) = -1 - 0 = -1$
$C_{23} = (-1)^{2+3} \begin{vmatrix} 1 & 0 \\ 5 & 2 \end{vmatrix} = (-1)((1)(2) - (0)(5)) = (-1)(2 - 0) = -2$
$C_{31} = (-1)^{3+1} \begin{vmatrix} 0 & 0 \\ 3 & 0 \end{vmatrix} = (1)((0)(0) - (0)(3)) = 0 - 0 = 0$
$C_{32} = (-1)^{3+2} \begin{vmatrix} 1 & 0 \\ 3 & 0 \end{vmatrix} = (-1)((1)(0) - (0)(3)) = (-1)(0 - 0) = 0$
$C_{33} = (-1)^{3+3} \begin{vmatrix} 1 & 0 \\ 3 & 3 \end{vmatrix} = (1)((1)(3) - (0)(3)) = 3 - 0 = 3$
The matrix of cofactors is:
$\text{Cofactor}(A) = \begin{bmatrix} -3 & 3 & -9 \\ 0 & -1 & -2 \\ 0 & 0 & 3 \end{bmatrix}$
The adjoint of A is the transpose of the matrix of cofactors:
$\text{adj } A = (\text{Cofactor}(A))^T = \begin{bmatrix} -3 & 0 & 0 \\ 3 & -1 & 0 \\ -9 & -2 & 3 \end{bmatrix}$ ... (i)
The inverse of a matrix A is given by the formula $A^{-1} = \frac{1}{|A|} \text{ adj } A$.
Substituting the determinant $|A| = -3$ and the adjoint matrix from (i):
$A^{-1} = \frac{1}{-3} \begin{bmatrix} -3 & 0 & 0 \\ 3 & -1 & 0 \\ -9 & -2 & 3 \end{bmatrix}$
Multiplying the scalar $\frac{1}{-3}$ with each element of the matrix:
$A^{-1} = \begin{bmatrix} \frac{-3}{-3} & \frac{0}{-3} & \frac{0}{-3} \\ \frac{3}{-3} & \frac{-1}{-3} & \frac{0}{-3} \\ \frac{-9}{-3} & \frac{-2}{-3} & \frac{3}{-3} \end{bmatrix}$
Simplifying the fractions:
$A^{-1} = \begin{bmatrix} 1 & 0 & 0 \\ -1 & \frac{1}{3} & 0 \\ 3 & \frac{2}{3} & -1 \end{bmatrix}$ ... (ii)
Question 9. $\begin{bmatrix}2&1&3\\4&−1&0\\−7&2&1 \end{bmatrix}$
Answer:
Given:
Let the given matrix be A.
$A = \begin{bmatrix}2&1&3\\4&−1&0\\−7&2&1 \end{bmatrix}$
To Find:
The inverse of matrix A ($A^{-1}$), if it exists.
Solution:
First, we calculate the determinant of A to check if the inverse exists.
$|A| = \det(A) = \begin{vmatrix}2&1&3\\4&−1&0\\−7&2&1 \end{vmatrix}$
Expanding along the third column (since it has a zero):
$|A| = 3 \cdot C_{13} + 0 \cdot C_{23} + 1 \cdot C_{33}$
$C_{13} = (-1)^{1+3} \begin{vmatrix} 4 & -1 \\ -7 & 2 \end{vmatrix} = (1)((4)(2) - (-1)(-7)) = 8 - 7 = 1$
$C_{23} = (-1)^{2+3} \begin{vmatrix} 2 & 1 \\ -7 & 2 \end{vmatrix} = (-1)((2)(2) - (1)(-7)) = (-1)(4 + 7) = -11$
$C_{33} = (-1)^{3+3} \begin{vmatrix} 2 & 1 \\ 4 & -1 \end{vmatrix} = (1)((2)(-1) - (1)(4)) = -2 - 4 = -6$
$|A| = 3(1) + 0(-11) + 1(-6) = 3 + 0 - 6 = -3$
Since $|A| = -3 \neq 0$, the matrix A is invertible.
Next, we find the adjoint of A by finding the transpose of the matrix of cofactors.
$C_{11} = (-1)^{1+1} \begin{vmatrix} -1 & 0 \\ 2 & 1 \end{vmatrix} = (1)((-1)(1) - (0)(2)) = -1 - 0 = -1$
$C_{12} = (-1)^{1+2} \begin{vmatrix} 4 & 0 \\ -7 & 1 \end{vmatrix} = (-1)((4)(1) - (0)(-7)) = (-1)(4 - 0) = -4$
$C_{13} = (-1)^{1+3} \begin{vmatrix} 4 & -1 \\ -7 & 2 \end{vmatrix} = (1)((4)(2) - (-1)(-7)) = 8 - 7 = 1$
$C_{21} = (-1)^{2+1} \begin{vmatrix} 1 & 3 \\ 2 & 1 \end{vmatrix} = (-1)((1)(1) - (3)(2)) = (-1)(1 - 6) = 5$
$C_{22} = (-1)^{2+2} \begin{vmatrix} 2 & 3 \\ -7 & 1 \end{vmatrix} = (1)((2)(1) - (3)(-7)) = 2 + 21 = 23$
$C_{23} = (-1)^{2+3} \begin{vmatrix} 2 & 1 \\ -7 & 2 \end{vmatrix} = (-1)((2)(2) - (1)(-7)) = (-1)(4 + 7) = -11$
$C_{31} = (-1)^{3+1} \begin{vmatrix} 1 & 3 \\ -1 & 0 \end{vmatrix} = (1)((1)(0) - (3)(-1)) = 0 + 3 = 3$
$C_{32} = (-1)^{3+2} \begin{vmatrix} 2 & 3 \\ 4 & 0 \end{vmatrix} = (-1)((2)(0) - (3)(4)) = (-1)(0 - 12) = 12$
$C_{33} = (-1)^{3+3} \begin{vmatrix} 2 & 1 \\ 4 & -1 \end{vmatrix} = (1)((2)(-1) - (1)(4)) = -2 - 4 = -6$
The matrix of cofactors is:
$\text{Cofactor}(A) = \begin{bmatrix} -1 & -4 & 1 \\ 5 & 23 & -11 \\ 3 & 12 & -6 \end{bmatrix}$
The adjoint of A is the transpose of the matrix of cofactors:
$\text{adj } A = (\text{Cofactor}(A))^T = \begin{bmatrix} -1 & 5 & 3 \\ -4 & 23 & 12 \\ 1 & -11 & -6 \end{bmatrix}$ ... (i)
The inverse of a matrix A is given by the formula $A^{-1} = \frac{1}{|A|} \text{ adj } A$.
Substituting the determinant $|A| = -3$ and the adjoint matrix from (i):
$A^{-1} = \frac{1}{-3} \begin{bmatrix} -1 & 5 & 3 \\ -4 & 23 & 12 \\ 1 & -11 & -6 \end{bmatrix}$
Multiplying the scalar $\frac{1}{-3}$ with each element of the matrix:
$A^{-1} = \begin{bmatrix} \frac{-1}{-3} & \frac{5}{-3} & \frac{3}{-3} \\ \frac{-4}{-3} & \frac{23}{-3} & \frac{12}{-3} \\ \frac{1}{-3} & \frac{-11}{-3} & \frac{-6}{-3} \end{bmatrix}$
Simplifying the fractions:
$A^{-1} = \begin{bmatrix} \frac{1}{3} & -\frac{5}{3} & -1 \\ \frac{4}{3} & -\frac{23}{3} & -4 \\ -\frac{1}{3} & \frac{11}{3} & 2 \end{bmatrix}$ ... (ii)
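A quick numerical sanity check (a sketch assuming NumPy is available) confirms that the matrix obtained above really is the inverse, i.e. $A A^{-1} = I$:

```python
import numpy as np

# The inverse found for Question 9, written as (1/3) times an integer
# matrix, should satisfy A A^{-1} = I.
A = np.array([[2, 1, 3],
              [4, -1, 0],
              [-7, 2, 1]], dtype=float)
A_inv = np.array([[1, -5, -3],
                  [4, -23, -12],
                  [-1, 11, 6]], dtype=float) / 3

print(np.allclose(A @ A_inv, np.eye(3)))   # True
```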
Question 10. $\begin{bmatrix}1&−1&2\\0&2&−3\\3&−2&4 \end{bmatrix}$
Answer:
Given:
Let the given matrix be A.
$A = \begin{bmatrix}1&−1&2\\0&2&−3\\3&−2&4 \end{bmatrix}$
To Find:
The inverse of matrix A ($A^{-1}$), if it exists.
Solution:
First, we calculate the determinant of A to check if the inverse exists.
$|A| = \det(A) = \begin{vmatrix}1&−1&2\\0&2&−3\\3&−2&4 \end{vmatrix}$
Expanding along the first column:
$|A| = 1 \cdot \begin{vmatrix} 2 & -3 \\ -2 & 4 \end{vmatrix} - 0 \cdot \begin{vmatrix} -1 & 2 \\ -2 & 4 \end{vmatrix} + 3 \cdot \begin{vmatrix} -1 & 2 \\ 2 & -3 \end{vmatrix}$
$|A| = 1((2)(4) - (-3)(-2)) - 0 + 3((-1)(-3) - (2)(2))$
$|A| = 1(8 - 6) + 3(3 - 4)$
$|A| = 1(2) + 3(-1)$
$|A| = 2 - 3 = -1$
Since $|A| = -1 \neq 0$, the matrix A is invertible.
Next, we find the adjoint of A by finding the transpose of the matrix of cofactors.
We calculate the cofactors $C_{ij} = (-1)^{i+j} M_{ij}$, where $M_{ij}$ is the minor of the element $a_{ij}$.
$C_{11} = (-1)^{1+1} \begin{vmatrix} 2 & -3 \\ -2 & 4 \end{vmatrix} = (1)((2)(4) - (-3)(-2)) = 8 - 6 = 2$
$C_{12} = (-1)^{1+2} \begin{vmatrix} 0 & -3 \\ 3 & 4 \end{vmatrix} = (-1)((0)(4) - (-3)(3)) = (-1)(0 + 9) = -9$
$C_{13} = (-1)^{1+3} \begin{vmatrix} 0 & 2 \\ 3 & -2 \end{vmatrix} = (1)((0)(-2) - (2)(3)) = 0 - 6 = -6$
$C_{21} = (-1)^{2+1} \begin{vmatrix} -1 & 2 \\ -2 & 4 \end{vmatrix} = (-1)((-1)(4) - (2)(-2)) = (-1)(-4 + 4) = 0$
$C_{22} = (-1)^{2+2} \begin{vmatrix} 1 & 2 \\ 3 & 4 \end{vmatrix} = (1)((1)(4) - (2)(3)) = 4 - 6 = -2$
$C_{23} = (-1)^{2+3} \begin{vmatrix} 1 & -1 \\ 3 & -2 \end{vmatrix} = (-1)((1)(-2) - (-1)(3)) = (-1)(-2 + 3) = -1$
$C_{31} = (-1)^{3+1} \begin{vmatrix} -1 & 2 \\ 2 & -3 \end{vmatrix} = (1)((-1)(-3) - (2)(2)) = 3 - 4 = -1$
$C_{32} = (-1)^{3+2} \begin{vmatrix} 1 & 2 \\ 0 & -3 \end{vmatrix} = (-1)((1)(-3) - (2)(0)) = (-1)(-3 - 0) = 3$
$C_{33} = (-1)^{3+3} \begin{vmatrix} 1 & -1 \\ 0 & 2 \end{vmatrix} = (1)((1)(2) - (-1)(0)) = 2 - 0 = 2$
The matrix of cofactors is:
$\text{Cofactor}(A) = \begin{bmatrix} 2 & -9 & -6 \\ 0 & -2 & -1 \\ -1 & 3 & 2 \end{bmatrix}$
The adjoint of A is the transpose of the matrix of cofactors:
$\text{adj } A = (\text{Cofactor}(A))^T = \begin{bmatrix} 2 & 0 & -1 \\ -9 & -2 & 3 \\ -6 & -1 & 2 \end{bmatrix}$ ... (i)
The inverse of a matrix A is given by the formula $A^{-1} = \frac{1}{|A|} \text{ adj } A$.
Substituting the determinant $|A| = -1$ and the adjoint matrix from (i):
$A^{-1} = \frac{1}{-1} \begin{bmatrix} 2 & 0 & -1 \\ -9 & -2 & 3 \\ -6 & -1 & 2 \end{bmatrix}$
Multiplying the scalar $\frac{1}{-1} = -1$ with each element of the matrix:
$A^{-1} = -1 \begin{bmatrix} 2 & 0 & -1 \\ -9 & -2 & 3 \\ -6 & -1 & 2 \end{bmatrix} = \begin{bmatrix} -2 & 0 & 1 \\ 9 & 2 & -3 \\ 6 & 1 & -2 \end{bmatrix}$ ... (ii)
Question 11. $\begin{bmatrix} 1&0&0\\0&\cos\alpha&\sin\alpha\\0&\sin\alpha&−\cos\alpha \end{bmatrix}$
Answer:
Given:
The matrix $A = \begin{bmatrix}1&0&0\\0&\cos\alpha&\sin\alpha\\0&\sin\alpha&−\cos\alpha\end{bmatrix}$
To Find:
The inverse of the matrix A, denoted by $A^{-1}$.
Solution:
First, let's calculate the determinant of the matrix A to check if the inverse exists.
Expanding along the first row:
$|A| = 1 \begin{vmatrix} \cos\alpha & \sin\alpha \\ \sin\alpha & -\cos\alpha \end{vmatrix} - 0 + 0$
$|A| = 1 ((\cos\alpha)(-\cos\alpha) - (\sin\alpha)(\sin\alpha))$
$|A| = -\cos^2\alpha - \sin^2\alpha$
$|A| = -(\cos^2\alpha + \sin^2\alpha)$
Using the trigonometric identity $\cos^2\alpha + \sin^2\alpha = 1$, we get:
$|A| = -1$
Since $|A| \neq 0$, the inverse of the matrix exists.
We can find the inverse using the property of orthogonal matrices. A matrix is orthogonal if $A A^T = I$. For an orthogonal matrix, the inverse is equal to its transpose, i.e., $A^{-1} = A^T$.
Let's find the transpose of A, denoted as $A^T$.
$A^T = \begin{bmatrix}1&0&0\\0&\cos\alpha&\sin\alpha\\0&\sin\alpha&−\cos\alpha\end{bmatrix}^T = \begin{bmatrix}1&0&0\\0&\cos\alpha&\sin\alpha\\0&\sin\alpha&−\cos\alpha\end{bmatrix}$
In this case, the matrix is symmetric, so $A = A^T$.
Now, let's check if A is orthogonal by calculating $A A^T$.
$A A^T = \begin{bmatrix}1&0&0\\0&\cos\alpha&\sin\alpha\\0&\sin\alpha&−\cos\alpha\end{bmatrix} \begin{bmatrix}1&0&0\\0&\cos\alpha&\sin\alpha\\0&\sin\alpha&−\cos\alpha\end{bmatrix}$
$A A^T = \begin{bmatrix} 1 & 0 & 0 \\ 0 & \cos^2\alpha + \sin^2\alpha & \cos\alpha\sin\alpha - \sin\alpha\cos\alpha \\ 0 & \sin\alpha\cos\alpha - \cos\alpha\sin\alpha & \sin^2\alpha + \cos^2\alpha \end{bmatrix}$
$A A^T = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix} = I$
Since $A A^T = I$, the matrix A is orthogonal.
Therefore, $A^{-1} = A^T$. As we found that $A = A^T$, it implies that $A^{-1} = A$.
So, the inverse of the given matrix is the matrix itself.
$\mathbf{A^{-1} = \begin{bmatrix}1&0&0\\0&\cos\alpha&\sin\alpha\\0&\sin\alpha&−\cos\alpha\end{bmatrix}}$
Alternate Solution (Using Adjoint Method):
We can find the inverse using the formula $A^{-1} = \frac{1}{|A|} \text{adj}(A)$.
We have already calculated $|A| = -1$.
Now we find the matrix of cofactors:
$C_{11} = \begin{vmatrix} \cos\alpha & \sin\alpha \\ \sin\alpha & -\cos\alpha \end{vmatrix} = -1$
$C_{12} = -\begin{vmatrix} 0 & \sin\alpha \\ 0 & -\cos\alpha \end{vmatrix} = 0$
$C_{13} = \begin{vmatrix} 0 & \cos\alpha \\ 0 & \sin\alpha \end{vmatrix} = 0$
$C_{21} = -\begin{vmatrix} 0 & 0 \\ \sin\alpha & -\cos\alpha \end{vmatrix} = 0$
$C_{22} = \begin{vmatrix} 1 & 0 \\ 0 & -\cos\alpha \end{vmatrix} = -\cos\alpha$
$C_{23} = -\begin{vmatrix} 1 & 0 \\ 0 & \sin\alpha \end{vmatrix} = -\sin\alpha$
$C_{31} = \begin{vmatrix} 0 & 0 \\ \cos\alpha & \sin\alpha \end{vmatrix} = 0$
$C_{32} = -\begin{vmatrix} 1 & 0 \\ 0 & \sin\alpha \end{vmatrix} = -\sin\alpha$
$C_{33} = \begin{vmatrix} 1 & 0 \\ 0 & \cos\alpha \end{vmatrix} = \cos\alpha$
The matrix of cofactors is $\begin{bmatrix} -1 & 0 & 0 \\ 0 & -\cos\alpha & -\sin\alpha \\ 0 & -\sin\alpha & \cos\alpha \end{bmatrix}$.
The adjoint of A, adj(A), is the transpose of the cofactor matrix.
$\text{adj}(A) = \begin{bmatrix} -1 & 0 & 0 \\ 0 & -\cos\alpha & -\sin\alpha \\ 0 & -\sin\alpha & \cos\alpha \end{bmatrix}^T = \begin{bmatrix} -1 & 0 & 0 \\ 0 & -\cos\alpha & -\sin\alpha \\ 0 & -\sin\alpha & \cos\alpha \end{bmatrix}$
Now, we find the inverse:
$A^{-1} = \frac{1}{|A|} \text{adj}(A) = \frac{1}{-1} \begin{bmatrix} -1 & 0 & 0 \\ 0 & -\cos\alpha & -\sin\alpha \\ 0 & -\sin\alpha & \cos\alpha \end{bmatrix}$
$A^{-1} = \begin{bmatrix} 1 & 0 & 0 \\ 0 & \cos\alpha & \sin\alpha \\ 0 & \sin\alpha & -\cos\alpha \end{bmatrix}$
Both methods yield the same result.
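As an optional numerical illustration (assuming NumPy, with a sample value of $\alpha$), the orthogonality argument can be checked directly:

```python
import numpy as np

# For the matrix of Question 11, check numerically that A A^T = I,
# so A^{-1} = A^T; since A is also symmetric, A^{-1} = A.
alpha = 0.7                       # any sample angle
c, s = np.cos(alpha), np.sin(alpha)
A = np.array([[1, 0, 0],
              [0, c, s],
              [0, s, -c]])

print(np.allclose(A @ A.T, np.eye(3)))     # True: A is orthogonal
print(np.allclose(np.linalg.inv(A), A))    # True: A^{-1} = A
```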
Question 12. Let A = $\begin{bmatrix}3&7\\2&5 \end{bmatrix}$ and B = $\begin{bmatrix}6&8\\7&9 \end{bmatrix}$. Verify that $(AB)^{-1} = B^{-1}A^{-1}$.
Answer:
Given:
Matrix $A = \begin{bmatrix}3&7\\2&5 \end{bmatrix}$
Matrix $B = \begin{bmatrix}6&8\\7&9 \end{bmatrix}$
To Verify:
$(AB)^{-1} = B^{-1}A^{-1}$
Solution:
To verify the property $(AB)^{-1} = B^{-1}A^{-1}$, we need to calculate the inverse of matrix $A$, the inverse of matrix $B$, the product $AB$, the inverse of $AB$, and finally the product $B^{-1}A^{-1}$, and then compare the results.
For a $2 \times 2$ matrix $M = \begin{bmatrix}a&b\\c&d\end{bmatrix}$, the inverse $M^{-1}$ is given by $M^{-1} = \frac{1}{\det(M)} \begin{bmatrix}d&-b\\-c&a\end{bmatrix}$, where the determinant $\det(M) = ad - bc$. The inverse exists if and only if $\det(M) \neq 0$.
Step 1: Find $A^{-1}$
First, calculate the determinant of $A$:
$\det(A) = (3)(5) - (7)(2) = 15 - 14 = 1$
Since $\det(A) = 1 \neq 0$, the inverse $A^{-1}$ exists.
Now, find the adjoint of $A$ (by swapping the diagonal elements and negating the off-diagonal elements):
$\text{adj}(A) = \begin{bmatrix}5&-7\\-2&3\end{bmatrix}$
Calculate $A^{-1}$:
$A^{-1} = \frac{1}{\det(A)} \text{adj}(A) = \frac{1}{1} \begin{bmatrix}5&-7\\-2&3\end{bmatrix} = \begin{bmatrix}5&-7\\-2&3\end{bmatrix}$
So, $A^{-1} = \begin{bmatrix}5&-7\\-2&3\end{bmatrix}$.
Step 2: Find $B^{-1}$
First, calculate the determinant of $B$:
$\det(B) = (6)(9) - (8)(7) = 54 - 56 = -2$
Since $\det(B) = -2 \neq 0$, the inverse $B^{-1}$ exists.
Now, find the adjoint of $B$:
$\text{adj}(B) = \begin{bmatrix}9&-8\\-7&6\end{bmatrix}$
Calculate $B^{-1}$:
$B^{-1} = \frac{1}{\det(B)} \text{adj}(B) = \frac{1}{-2} \begin{bmatrix}9&-8\\-7&6\end{bmatrix} = \begin{bmatrix}\frac{9}{-2}&\frac{-8}{-2}\\\frac{-7}{-2}&\frac{6}{-2}\end{bmatrix} = \begin{bmatrix}-\frac{9}{2}&4\\\frac{7}{2}&-3\end{bmatrix}$
So, $B^{-1} = \begin{bmatrix}-\frac{9}{2}&4\\\frac{7}{2}&-3\end{bmatrix}$.
Step 3: Find $AB$
Multiply matrix $A$ by matrix $B$:
$AB = \begin{bmatrix}3&7\\2&5 \end{bmatrix} \begin{bmatrix}6&8\\7&9 \end{bmatrix}$
$AB = \begin{bmatrix}(3)(6) + (7)(7) & (3)(8) + (7)(9)\\(2)(6) + (5)(7) & (2)(8) + (5)(9)\end{bmatrix}$
$AB = \begin{bmatrix}18 + 49 & 24 + 63\\12 + 35 & 16 + 45\end{bmatrix}$
$AB = \begin{bmatrix}67&87\\47&61\end{bmatrix}$
So, $AB = \begin{bmatrix}67&87\\47&61\end{bmatrix}$.
Step 4: Find $(AB)^{-1}$
First, calculate the determinant of $AB$:
$\det(AB) = (67)(61) - (87)(47) = 4087 - 4089 = -2$
Since $\det(AB) = -2 \neq 0$, the inverse $(AB)^{-1}$ exists.
Now, find the adjoint of $AB$:
$\text{adj}(AB) = \begin{bmatrix}61&-87\\-47&67\end{bmatrix}$
Calculate $(AB)^{-1}$:
$(AB)^{-1} = \frac{1}{\det(AB)} \text{adj}(AB) = \frac{1}{-2} \begin{bmatrix}61&-87\\-47&67\end{bmatrix} = \begin{bmatrix}\frac{61}{-2}&\frac{-87}{-2}\\\frac{-47}{-2}&\frac{67}{-2}\end{bmatrix} = \begin{bmatrix}-\frac{61}{2}&\frac{87}{2}\\\frac{47}{2}&-\frac{67}{2}\end{bmatrix}$
So, $(AB)^{-1} = \begin{bmatrix}-\frac{61}{2}&\frac{87}{2}\\\frac{47}{2}&-\frac{67}{2}\end{bmatrix}$.
Step 5: Find $B^{-1}A^{-1}$
Multiply matrix $B^{-1}$ by matrix $A^{-1}$:
$B^{-1}A^{-1} = \begin{bmatrix}-\frac{9}{2}&4\\\frac{7}{2}&-3\end{bmatrix} \begin{bmatrix}5&-7\\-2&3\end{bmatrix}$
$B^{-1}A^{-1} = \begin{bmatrix}(-\frac{9}{2})(5) + (4)(-2) & (-\frac{9}{2})(-7) + (4)(3)\\( \frac{7}{2})(5) + (-3)(-2) & ( \frac{7}{2})(-7) + (-3)(3)\end{bmatrix}$
$B^{-1}A^{-1} = \begin{bmatrix}-\frac{45}{2} - 8 & \frac{63}{2} + 12\\ \frac{35}{2} + 6 & -\frac{49}{2} - 9\end{bmatrix}$
To add/subtract the fractions, find common denominators:
$B^{-1}A^{-1} = \begin{bmatrix}-\frac{45}{2} - \frac{16}{2} & \frac{63}{2} + \frac{24}{2}\\ \frac{35}{2} + \frac{12}{2} & -\frac{49}{2} - \frac{18}{2}\end{bmatrix}$
$B^{-1}A^{-1} = \begin{bmatrix}\frac{-45 - 16}{2} & \frac{63 + 24}{2}\\ \frac{35 + 12}{2} & \frac{-49 - 18}{2}\end{bmatrix}$
$B^{-1}A^{-1} = \begin{bmatrix}-\frac{61}{2}&\frac{87}{2}\\\frac{47}{2}&-\frac{67}{2}\end{bmatrix}$
So, $B^{-1}A^{-1} = \begin{bmatrix}-\frac{61}{2}&\frac{87}{2}\\\frac{47}{2}&-\frac{67}{2}\end{bmatrix}$.
Step 6: Comparison
From Step 4, we have $(AB)^{-1} = \begin{bmatrix}-\frac{61}{2}&\frac{87}{2}\\\frac{47}{2}&-\frac{67}{2}\end{bmatrix}$.
From Step 5, we have $B^{-1}A^{-1} = \begin{bmatrix}-\frac{61}{2}&\frac{87}{2}\\\frac{47}{2}&-\frac{67}{2}\end{bmatrix}$.
Comparing the two results, we see that the matrices are identical.
Conclusion:
Since $(AB)^{-1}$ and $B^{-1}A^{-1}$ are equal,
$\begin{bmatrix}-\frac{61}{2}&\frac{87}{2}\\\frac{47}{2}&-\frac{67}{2}\end{bmatrix} = \begin{bmatrix}-\frac{61}{2}&\frac{87}{2}\\\frac{47}{2}&-\frac{67}{2}\end{bmatrix}$
The property $(AB)^{-1} = B^{-1}A^{-1}$ is successfully verified for the given matrices $A$ and $B$.
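A compact numerical confirmation of the same property (a sketch assuming NumPy is available):

```python
import numpy as np

# Verify (AB)^{-1} = B^{-1} A^{-1} for the matrices of Question 12.
A = np.array([[3, 7], [2, 5]], dtype=float)
B = np.array([[6, 8], [7, 9]], dtype=float)

lhs = np.linalg.inv(A @ B)
rhs = np.linalg.inv(B) @ np.linalg.inv(A)

print(np.allclose(lhs, rhs))   # True
print(lhs)                     # [[-30.5, 43.5], [23.5, -33.5]]
```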
Question 13. If A = $\begin{bmatrix}3&1\\−1&2 \end{bmatrix}$, show that $A^2 - 5A + 7I = O$. Hence find $A^{-1}$.
Answer:
Given:
Matrix $A = \begin{bmatrix}3&1\\-1&2 \end{bmatrix}$
$I$ is the identity matrix of order $2 \times 2$, i.e., $I = \begin{bmatrix}1&0\\0&1 \end{bmatrix}$.
$O$ is the zero matrix of order $2 \times 2$, i.e., $O = \begin{bmatrix}0&0\\0&0 \end{bmatrix}$.
To Show:
$A^2 - 5A + 7I = O$
To Find:
$A^{-1}$ using the given equation.
Solution:
First, we calculate $A^2$:
$A^2 = A \times A = \begin{bmatrix}3&1\\-1&2 \end{bmatrix} \begin{bmatrix}3&1\\-1&2 \end{bmatrix}$
$A^2 = \begin{bmatrix}(3)(3) + (1)(-1) & (3)(1) + (1)(2)\\(-1)(3) + (2)(-1) & (-1)(1) + (2)(2)\end{bmatrix}$
$A^2 = \begin{bmatrix}9 - 1 & 3 + 2\\-3 - 2 & -1 + 4\end{bmatrix}$
$A^2 = \begin{bmatrix}8&5\\-5&3\end{bmatrix}$
Next, we calculate $5A$:
$5A = 5 \begin{bmatrix}3&1\\-1&2 \end{bmatrix} = \begin{bmatrix}5 \times 3 & 5 \times 1\\5 \times -1 & 5 \times 2 \end{bmatrix}$
$5A = \begin{bmatrix}15&5\\-5&10\end{bmatrix}$
Now, we calculate $7I$:
$7I = 7 \begin{bmatrix}1&0\\0&1 \end{bmatrix} = \begin{bmatrix}7 \times 1 & 7 \times 0\\7 \times 0 & 7 \times 1 \end{bmatrix}$
$7I = \begin{bmatrix}7&0\\0&7\end{bmatrix}$
Now, we substitute these results into the expression $A^2 - 5A + 7I$:
$A^2 - 5A + 7I = \begin{bmatrix}8&5\\-5&3\end{bmatrix} - \begin{bmatrix}15&5\\-5&10\end{bmatrix} + \begin{bmatrix}7&0\\0&7\end{bmatrix}$
$A^2 - 5A + 7I = \begin{bmatrix}8 - 15&5 - 5\\-5 - (-5)&3 - 10\end{bmatrix} + \begin{bmatrix}7&0\\0&7\end{bmatrix}$
$A^2 - 5A + 7I = \begin{bmatrix}-7&0\\-5 + 5&-7\end{bmatrix} + \begin{bmatrix}7&0\\0&7\end{bmatrix}$
$A^2 - 5A + 7I = \begin{bmatrix}-7&0\\0&-7\end{bmatrix} + \begin{bmatrix}7&0\\0&7\end{bmatrix}$
$A^2 - 5A + 7I = \begin{bmatrix}-7 + 7&0 + 0\\0 + 0&-7 + 7\end{bmatrix}$
$A^2 - 5A + 7I = \begin{bmatrix}0&0\\0&0\end{bmatrix}$
This is the zero matrix $O$.
Thus, we have shown that $A^2 - 5A + 7I = O$.
Finding $A^{-1}$ using the equation:
We have the equation:
$A^2 - 5A + 7I = O$
To find $A^{-1}$, we can multiply the entire equation by $A^{-1}$ from the left. Note that since the determinant of A is $(3)(2) - (1)(-1) = 6 + 1 = 7 \neq 0$, $A^{-1}$ exists.
$A^{-1}(A^2 - 5A + 7I) = A^{-1}O$
Using the distributive property of matrix multiplication:
$A^{-1}A^2 - A^{-1}(5A) + A^{-1}(7I) = O$
Using the properties $A^{-1}A^2 = (A^{-1}A)A = IA = A$, $A^{-1}A = I$, and $A^{-1}I = A^{-1}$:
$(A^{-1}A)A - 5(A^{-1}A) + 7(A^{-1}I) = O$
$IA - 5I + 7A^{-1} = O$
$A - 5I + 7A^{-1} = O$
Now, we solve for $A^{-1}$:
$7A^{-1} = 5I - A$
$A^{-1} = \frac{1}{7}(5I - A)$
Now, we calculate the matrix $5I - A$:
$5I - A = 5\begin{bmatrix}1&0\\0&1 \end{bmatrix} - \begin{bmatrix}3&1\\-1&2 \end{bmatrix}$
$5I - A = \begin{bmatrix}5&0\\0&5 \end{bmatrix} - \begin{bmatrix}3&1\\-1&2 \end{bmatrix}$
$5I - A = \begin{bmatrix}5 - 3&0 - 1\\0 - (-1)&5 - 2 \end{bmatrix}$
$5I - A = \begin{bmatrix}2&-1\\1&3 \end{bmatrix}$
Finally, we calculate $A^{-1}$:
$A^{-1} = \frac{1}{7} \begin{bmatrix}2&-1\\1&3 \end{bmatrix}$
$A^{-1} = \begin{bmatrix}\frac{2}{7}&-\frac{1}{7}\\\frac{1}{7}&\frac{3}{7} \end{bmatrix}$
Conclusion:
We have shown that $A^2 - 5A + 7I = O$.
Using this equation, the inverse of matrix A is found to be:
$A^{-1} = \begin{bmatrix}\frac{2}{7}&-\frac{1}{7}\\\frac{1}{7}&\frac{3}{7} \end{bmatrix}$
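The matrix identity and the inverse derived from it can also be checked numerically; the sketch below assumes NumPy is available and is purely a cross-check:

```python
import numpy as np

# Question 13: A^2 - 5A + 7I = O, and A^{-1} = (5I - A)/7 follows.
A = np.array([[3, 1], [-1, 2]], dtype=float)
I = np.eye(2)

print(A @ A - 5 * A + 7 * I)            # zero matrix
A_inv = (5 * I - A) / 7
print(np.allclose(A @ A_inv, I))        # True
print(A_inv)                            # [[2/7, -1/7], [1/7, 3/7]]
```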
Question 14. For the matrix A = $\begin{bmatrix}3&2\\1&1 \end{bmatrix}$, find the numbers a and b such that $A^2 + aA + bI = O$.
Answer:
Given:
Matrix $A = \begin{bmatrix}3&2\\1&1 \end{bmatrix}$
The equation $A^2 + aA + bI = O$, where $I$ is the identity matrix and $O$ is the zero matrix.
To Find:
The values of the numbers $a$ and $b$.
Solution:
We are given the equation $A^2 + aA + bI = O$. We need to calculate each term on the left side and then equate the sum to the zero matrix to find $a$ and $b$.
The identity matrix $I$ of the same order as $A$ ($2 \times 2$) is $\begin{bmatrix}1&0\\0&1 \end{bmatrix}$.
The zero matrix $O$ of the same order ($2 \times 2$) is $\begin{bmatrix}0&0\\0&0 \end{bmatrix}$.
First, calculate $A^2$:
$A^2 = A \times A = \begin{bmatrix}3&2\\1&1 \end{bmatrix} \begin{bmatrix}3&2\\1&1 \end{bmatrix}$
$A^2 = \begin{bmatrix}(3)(3) + (2)(1) & (3)(2) + (2)(1)\\(1)(3) + (1)(1) & (1)(2) + (1)(1)\end{bmatrix}$
$A^2 = \begin{bmatrix}9 + 2 & 6 + 2\\3 + 1 & 2 + 1\end{bmatrix} = \begin{bmatrix}11&8\\4&3\end{bmatrix}$
Next, calculate $aA$:
$aA = a \begin{bmatrix}3&2\\1&1 \end{bmatrix} = \begin{bmatrix}a \times 3&a \times 2\\a \times 1&a \times 1\end{bmatrix} = \begin{bmatrix}3a&2a\\a&a\end{bmatrix}$
Next, calculate $bI$:
$bI = b \begin{bmatrix}1&0\\0&1 \end{bmatrix} = \begin{bmatrix}b \times 1&b \times 0\\b \times 0&b \times 1\end{bmatrix} = \begin{bmatrix}b&0\\0&b\end{bmatrix}$
Now, substitute these results into the given equation $A^2 + aA + bI = O$:
$\begin{bmatrix}11&8\\4&3\end{bmatrix} + \begin{bmatrix}3a&2a\\a&a\end{bmatrix} + \begin{bmatrix}b&0\\0&b\end{bmatrix} = \begin{bmatrix}0&0\\0&0\end{bmatrix}$
Perform the matrix addition on the left side:
$\begin{bmatrix}11 + 3a + b & 8 + 2a + 0\\4 + a + 0 & 3 + a + b\end{bmatrix} = \begin{bmatrix}0&0\\0&0\end{bmatrix}$
$\begin{bmatrix}11 + 3a + b & 8 + 2a\\4 + a & 3 + a + b\end{bmatrix} = \begin{bmatrix}0&0\\0&0\end{bmatrix}$
For two matrices to be equal, their corresponding elements must be equal. This gives us a system of linear equations:
$11 + 3a + b = 0 \quad$...(1)
$8 + 2a = 0 \quad$...(2)
$4 + a = 0 \quad$...(3)
$3 + a + b = 0 \quad$...(4)
From equation (2):
$2a = -8$
$a = \frac{-8}{2} = -4$
From equation (3):
$a = -4$
Equations (2) and (3) give the same value, $a = -4$, so the value of $a$ is consistent.
Now substitute $a = -4$ into equation (4):
$3 + (-4) + b = 0$
$3 - 4 + b = 0$
$-1 + b = 0$
$b = 1$
Let's verify these values of $a$ and $b$ using equation (1):
$11 + 3a + b = 11 + 3(-4) + 1 = 11 - 12 + 1 = -1 + 1 = 0$.
The values $a=-4$ and $b=1$ satisfy all the equations.
Conclusion:
The numbers $a$ and $b$ such that $A^2 + aA + bI = O$ are $a = -4$ and $b = 1$.
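A one-line numerical check of the values just found (optional, assuming NumPy is available):

```python
import numpy as np

# Question 14: with a = -4 and b = 1, A^2 + aA + bI should be the zero matrix.
A = np.array([[3, 2], [1, 1]], dtype=float)
a, b = -4, 1

print(A @ A + a * A + b * np.eye(2))   # [[0, 0], [0, 0]]
```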
Question 15. For the matrix A = $\begin{bmatrix}1&1&1\\1&2&−3\\2&−1&3 \end{bmatrix}$, show that $A^3 - 6A^2 + 5A + 11I = O$. Hence, find $A^{-1}$.
Answer:
Given:
The matrix $A = \begin{bmatrix}1&1&1\\1&2&−3\\2&−1&3 \end{bmatrix}$.
Part 1: To Show that $A^3 - 6A^2 + 5A + 11I = O$
Proof:
First, we calculate $A^2$.
$A^2 = A \cdot A = \begin{bmatrix}1&1&1\\1&2&−3\\2&−1&3 \end{bmatrix} \begin{bmatrix}1&1&1\\1&2&−3\\2&−1&3 \end{bmatrix}$
$A^2 = \begin{bmatrix} 1(1)+1(1)+1(2) & 1(1)+1(2)+1(-1) & 1(1)+1(-3)+1(3) \\ 1(1)+2(1)+(-3)(2) & 1(1)+2(2)+(-3)(-1) & 1(1)+2(-3)+(-3)(3) \\ 2(1)+(-1)(1)+3(2) & 2(1)+(-1)(2)+3(-1) & 2(1)+(-1)(-3)+3(3) \end{bmatrix}$
$A^2 = \begin{bmatrix} 1+1+2 & 1+2-1 & 1-3+3 \\ 1+2-6 & 1+4+3 & 1-6-9 \\ 2-1+6 & 2-2-3 & 2+3+9 \end{bmatrix} = \begin{bmatrix}4&2&1\\−3&8&−14\\7&−3&14 \end{bmatrix}$
Next, we calculate $A^3$.
$A^3 = A^2 \cdot A = \begin{bmatrix}4&2&1\\−3&8&−14\\7&−3&14 \end{bmatrix} \begin{bmatrix}1&1&1\\1&2&−3\\2&−1&3 \end{bmatrix}$
$A^3 = \begin{bmatrix} 4(1)+2(1)+1(2) & 4(1)+2(2)+1(-1) & 4(1)+2(-3)+1(3) \\ -3(1)+8(1)+(-14)(2) & -3(1)+8(2)+(-14)(-1) & -3(1)+8(-3)+(-14)(3) \\ 7(1)+(-3)(1)+14(2) & 7(1)+(-3)(2)+14(-1) & 7(1)+(-3)(-3)+14(3) \end{bmatrix}$
$A^3 = \begin{bmatrix} 4+2+2 & 4+4-1 & 4-6+3 \\ -3+8-28 & -3+16+14 & -3-24-42 \\ 7-3+28 & 7-6-14 & 7+9+42 \end{bmatrix} = \begin{bmatrix}8&7&1\\−23&27&−69\\32&−13&58 \end{bmatrix}$
Now, we substitute $A^3$, $A^2$, $A$ and $I$ into the expression $A^3 - 6A^2 + 5A + 11I$:
LHS = $\begin{bmatrix}8&7&1\\−23&27&−69\\32&−13&58 \end{bmatrix} - 6\begin{bmatrix}4&2&1\\−3&8&−14\\7&−3&14 \end{bmatrix} + 5\begin{bmatrix}1&1&1\\1&2&−3\\2&−1&3 \end{bmatrix} + 11\begin{bmatrix}1&0&0\\0&1&0\\0&0&1 \end{bmatrix}$
LHS = $\begin{bmatrix}8&7&1\\−23&27&−69\\32&−13&58 \end{bmatrix} - \begin{bmatrix}24&12&6\\−18&48&−84\\42&−18&84 \end{bmatrix} + \begin{bmatrix}5&5&5\\5&10&−15\\10&−5&15 \end{bmatrix} + \begin{bmatrix}11&0&0\\0&11&0\\0&0&11 \end{bmatrix}$
LHS = $\begin{bmatrix} 8-24+5+11 & 7-12+5+0 & 1-6+5+0 \\ -23-(-18)+5+0 & 27-48+10+11 & -69-(-84)-15+0 \\ 32-42+10+0 & -13-(-18)-5+0 & 58-84+15+11 \end{bmatrix}$
LHS = $\begin{bmatrix} 0 & 0 & 0 \\ 0 & 0 & 0 \\ 0 & 0 & 0 \end{bmatrix} = O$
Thus, it is shown that $A^3 - 6A^2 + 5A + 11I = O$.
Part 2: To Find $A^{-1}$
Solution:
We start with the proven equation:
$A^3 – 6A^2 + 5A + 11I = O$
Post-multiplying the entire equation by $A^{-1}$ (the inverse exists because $|A| = 1(6 - 3) - 1(3 + 6) + 1(-1 - 4) = -11 \neq 0$):
$(A^3 – 6A^2 + 5A + 11I) A^{–1} = O \cdot A^{–1}$
$A^3 A^{–1} – 6A^2 A^{–1} + 5A A^{–1} + 11I A^{–1} = O$
Using the properties $A A^{–1} = I$ and $I A^{–1} = A^{–1}$:
$A^2(A A^{–1}) – 6A(A A^{–1}) + 5(A A^{–1}) + 11A^{–1} = O$
$A^2 I – 6A I + 5I + 11A^{–1} = O$
$A^2 – 6A + 5I + 11A^{–1} = O$
Now, we isolate the term $A^{-1}$:
$11A^{–1} = -A^2 + 6A - 5I$
$A^{–1} = \frac{1}{11}(-A^2 + 6A - 5I)$
Substitute the matrices $A^2$, $A$, and $I$ into this expression:
$A^{–1} = \frac{1}{11} \left( -\begin{bmatrix}4&2&1\\−3&8&−14\\7&−3&14 \end{bmatrix} + 6\begin{bmatrix}1&1&1\\1&2&−3\\2&−1&3 \end{bmatrix} - 5\begin{bmatrix}1&0&0\\0&1&0\\0&0&1 \end{bmatrix} \right)$
$A^{–1} = \frac{1}{11} \left( \begin{bmatrix}-4&-2&-1\\3&-8&14\\-7&3&-14 \end{bmatrix} + \begin{bmatrix}6&6&6\\6&12&-18\\12&-6&18 \end{bmatrix} - \begin{bmatrix}5&0&0\\0&5&0\\0&0&5 \end{bmatrix} \right)$
$A^{–1} = \frac{1}{11} \begin{bmatrix} -4+6-5 & -2+6-0 & -1+6-0 \\ 3+6-0 & -8+12-5 & 14-18-0 \\ -7+12-0 & 3-6-0 & -14+18-5 \end{bmatrix}$
$A^{–1} = \frac{1}{11} \begin{bmatrix}-3&4&5\\9&−1&−4\\5&−3&−1 \end{bmatrix}$
Therefore, the inverse of A is:
$\mathbf{A^{–1} = \begin{bmatrix} -\frac{3}{11} & \frac{4}{11} & \frac{5}{11} \\ \frac{9}{11} & -\frac{1}{11} & -\frac{4}{11} \\ \frac{5}{11} & -\frac{3}{11} & -\frac{1}{11} \end{bmatrix}}$
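As with the previous question, the polynomial identity and the resulting inverse can be cross-checked numerically (a sketch assuming NumPy is available):

```python
import numpy as np

# Question 15: A^3 - 6A^2 + 5A + 11I = O, and A^{-1} = (-A^2 + 6A - 5I)/11.
A = np.array([[1, 1, 1],
              [1, 2, -3],
              [2, -1, 3]], dtype=float)
I = np.eye(3)
A2 = A @ A
A3 = A2 @ A

print(np.allclose(A3 - 6 * A2 + 5 * A + 11 * I, 0))   # True
A_inv = (-A2 + 6 * A - 5 * I) / 11
print(np.allclose(A @ A_inv, I))                      # True
```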
Question 16. If A = $\begin{bmatrix}2&−1&1\\−1&2&−1\\1&−1&2 \end{bmatrix}$, verify that $A^3 - 6A^2 + 9A - 4I = O$ and hence find $A^{-1}$.
Answer:
Given:
The matrix $A = \begin{bmatrix}2&−1&1\\−1&2&−1\\1&−1&2 \end{bmatrix}$.
Part 1: To Verify that $A^3 - 6A^2 + 9A - 4I = O$
Verification:
First, we calculate $A^2$.
$A^2 = A \cdot A = \begin{bmatrix}2&−1&1\\−1&2&−1\\1&−1&2 \end{bmatrix} \begin{bmatrix}2&−1&1\\−1&2&−1\\1&−1&2 \end{bmatrix}$
$A^2 = \begin{bmatrix} 4+1+1 & -2-2-1 & 2+1+2 \\ -2-2-1 & 1+4+1 & -1-2-2 \\ 2+1+2 & -1-2-2 & 1+1+4 \end{bmatrix} = \begin{bmatrix}6&−5&5\\−5&6&−5\\5&−5&6 \end{bmatrix}$
Next, we calculate $A^3$.
$A^3 = A^2 \cdot A = \begin{bmatrix}6&−5&5\\−5&6&−5\\5&−5&6 \end{bmatrix} \begin{bmatrix}2&−1&1\\−1&2&−1\\1&−1&2 \end{bmatrix}$
$A^3 = \begin{bmatrix} 12+5+5 & -6-10-5 & 6+5+10 \\ -10-6-5 & 5+12+5 & -5-6-10 \\ 10+5+6 & -5-10-6 & 5+5+12 \end{bmatrix} = \begin{bmatrix}22&−21&21\\−21&22&−21\\21&−21&22 \end{bmatrix}$
Now, we substitute the calculated matrices into the expression $A^3 - 6A^2 + 9A - 4I$:
LHS = $\begin{bmatrix}22&−21&21\\−21&22&−21\\21&−21&22 \end{bmatrix} - 6\begin{bmatrix}6&−5&5\\−5&6&−5\\5&−5&6 \end{bmatrix} + 9\begin{bmatrix}2&−1&1\\−1&2&−1\\1&−1&2 \end{bmatrix} - 4\begin{bmatrix}1&0&0\\0&1&0\\0&0&1 \end{bmatrix}$
LHS = $\begin{bmatrix}22&−21&21\\−21&22&−21\\21&−21&22 \end{bmatrix} - \begin{bmatrix}36&−30&30\\−30&36&−30\\30&−30&36 \end{bmatrix} + \begin{bmatrix}18&−9&9\\−9&18&−9\\9&−9&18 \end{bmatrix} - \begin{bmatrix}4&0&0\\0&4&0\\0&0&4 \end{bmatrix}$
LHS = $\begin{bmatrix} 22-36+18-4 & -21-(-30)-9-0 & 21-30+9-0 \\ -21-(-30)-9-0 & 22-36+18-4 & -21-(-30)-9-0 \\ 21-30+9-0 & -21-(-30)-9-0 & 22-36+18-4 \end{bmatrix}$
LHS = $\begin{bmatrix} 40-40 & -21+30-9 & 30-30 \\ -21+30-9 & 40-40 & -21+30-9 \\ 30-30 & -21+30-9 & 40-40 \end{bmatrix}$
LHS = $\begin{bmatrix} 0 & 0 & 0 \\ 0 & 0 & 0 \\ 0 & 0 & 0 \end{bmatrix} = O$
Thus, it is verified that $A^3 - 6A^2 + 9A - 4I = O$.
Part 2: To Find $A^{-1}$
Solution:
We start with the verified equation:
$A^3 – 6A^2 + 9A – 4I = O$
To find the inverse, we first check if A is invertible by calculating its determinant.
$|A| = 2(2 \cdot 2 - (-1)(-1)) - (-1)((-1)(2) - (-1)(1)) + 1((-1)(-1) - 2(1))$
$|A| = 2(4-1) + 1(-2+1) + 1(1-2) = 2(3) + 1(-1) + 1(-1) = 6-1-1 = 4$.
Since $|A| = 4 \neq 0$, $A^{-1}$ exists.
From the equation $A^3 – 6A^2 + 9A – 4I = O$, we can write:
$A^3 – 6A^2 + 9A = 4I$
Post-multiplying the entire equation by $A^{-1}$:
$(A^3 – 6A^2 + 9A) A^{–1} = 4I A^{–1}$
$A^2(A A^{–1}) – 6A(A A^{–1}) + 9(A A^{–1}) = 4A^{–1}$
$A^2 I – 6A I + 9I = 4A^{–1}$
$A^2 – 6A + 9I = 4A^{–1}$
Isolating $A^{-1}$:
$A^{–1} = \frac{1}{4}(A^2 – 6A + 9I)$
Substitute the matrices $A^2$, $A$, and $I$:
$A^{–1} = \frac{1}{4} \left( \begin{bmatrix}6&−5&5\\−5&6&−5\\5&−5&6 \end{bmatrix} - 6\begin{bmatrix}2&−1&1\\−1&2&−1\\1&−1&2 \end{bmatrix} + 9\begin{bmatrix}1&0&0\\0&1&0\\0&0&1 \end{bmatrix} \right)$
$A^{–1} = \frac{1}{4} \left( \begin{bmatrix}6&−5&5\\−5&6&−5\\5&−5&6 \end{bmatrix} - \begin{bmatrix}12&−6&6\\−6&12&−6\\6&−6&12 \end{bmatrix} + \begin{bmatrix}9&0&0\\0&9&0\\0&0&9 \end{bmatrix} \right)$
$A^{–1} = \frac{1}{4} \begin{bmatrix} 6-12+9 & -5-(-6)+0 & 5-6+0 \\ -5-(-6)+0 & 6-12+9 & -5-(-6)+0 \\ 5-6+0 & -5-(-6)+0 & 6-12+9 \end{bmatrix}$
$A^{–1} = \frac{1}{4} \begin{bmatrix}3&1&−1\\1&3&1\\−1&1&3 \end{bmatrix}$
Therefore, the inverse of A is:
$\mathbf{A^{–1} = \begin{bmatrix} \frac{3}{4} & \frac{1}{4} & -\frac{1}{4} \\ \frac{1}{4} & \frac{3}{4} & \frac{1}{4} \\ -\frac{1}{4} & \frac{1}{4} & \frac{3}{4} \end{bmatrix}}$
Question 17. Let A be a nonsingular square matrix of order $3 \times 3$. Then $|\text{adj } A|$ is equal to
(A) $|A|$
(B) $|A|^2$
(C) $|A|^3$
(D) $3|A|$
Answer:
Given:
$A$ is a nonsingular square matrix of order $3 \times 3$.
A nonsingular matrix means its determinant is non-zero, i.e., $|A| \neq 0$.
To Find:
The value of $|\text{adj} A|$.
Solution:
We know the property that for any square matrix $A$ of order $n$, the product of the matrix and its adjoint is given by:
$A (\text{adj} A) = (\text{adj} A) A = |A| I_n$
where $I_n$ is the identity matrix of order $n$.
In this problem, the order of the matrix $A$ is $n=3$. So, $I_n$ is the $3 \times 3$ identity matrix $I_3 = \begin{bmatrix}1&0&0\\0&1&0\\0&0&1\end{bmatrix}$.
Thus, we have:
$A (\text{adj} A) = |A| I_3$
Now, we take the determinant of both sides of the equation:
$|A (\text{adj} A)| = ||A| I_3|$
Using the property of determinants $|AB| = |A| |B|$:
$|A| |\text{adj} A| = ||A| I_3|$
For a scalar $k$ and an identity matrix $I_n$ of order $n$, the determinant is $|k I_n| = k^n$. In this case, the scalar is $|A|$ and the order is $n=3$.
So, $||A| I_3| = |A|^3$.
Substituting this back into the equation:
$|A| |\text{adj} A| = |A|^3$
Since $A$ is a nonsingular matrix, $|A| \neq 0$. We can divide both sides of the equation by $|A|$:
$|\text{adj} A| = \frac{|A|^3}{|A|}$
$|\text{adj} A| = |A|^{3-1}$
$|\text{adj} A| = |A|^2$
The general formula for the determinant of the adjoint of a square matrix $A$ of order $n$ is $|\text{adj} A| = |A|^{n-1}$. For $n=3$, this gives $|\text{adj} A| = |A|^{3-1} = |A|^2$.
Conclusion:
The value of $|\text{adj} A|$ is $|A|^2$. This corresponds to option (B).
The final answer is (B) $|A|^2$.
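The property $|\text{adj } A| = |A|^{n-1}$ is easy to illustrate numerically. The sketch below (assuming NumPy is available; the $3 \times 3$ matrix reused from Question 9 is only an example) computes both sides for $n = 3$:

```python
import numpy as np

# Illustrate |adj A| = |A|^(n-1) with n = 3, so |adj A| = |A|^2.
def adjugate(M):
    n = M.shape[0]
    cof = np.zeros_like(M, dtype=float)
    for i in range(n):
        for j in range(n):
            minor = np.delete(np.delete(M, i, axis=0), j, axis=1)
            cof[i, j] = (-1) ** (i + j) * np.linalg.det(minor)
    return cof.T

A = np.array([[2, 1, 3],
              [4, -1, 0],
              [-7, 2, 1]], dtype=float)

print(np.linalg.det(adjugate(A)))   # ~9.0
print(np.linalg.det(A) ** 2)        # ~9.0, since |A| = -3
```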
Question 18. If A is an invertible matrix of order 2, then $\det(A^{-1})$ is equal to
(A) $\det(A)$
(B) $\frac{1}{\det(A)}$
(C) 1
(D) 0
Answer:
Given:
$A$ is an invertible square matrix of order $2 \times 2$.
An invertible matrix is also known as a nonsingular matrix, which means its determinant is non-zero, i.e., $\det(A) = |A| \neq 0$.
To Find:
The value of $\det(A^{-1})$.
Solution:
By the definition of an inverse matrix, for an invertible matrix $A$, there exists a matrix $A^{-1}$ such that:
$A A^{-1} = I$
where $I$ is the identity matrix of the same order as $A$. In this case, since $A$ is of order 2, $I = I_2 = \begin{bmatrix}1&0\\0&1\end{bmatrix}$.
Taking the determinant of both sides of the equation $A A^{-1} = I$:
$\det(A A^{-1}) = \det(I)$
Using the property of determinants which states that the determinant of a product of matrices is the product of their determinants, i.e., $\det(AB) = \det(A) \det(B)$, we can write the left side as:
$\det(A) \det(A^{-1}) = \det(I)$
We also know that the determinant of an identity matrix of any order is 1, i.e., $\det(I) = 1$.
Substituting this value into the equation:
$\det(A) \det(A^{-1}) = 1$
Since $A$ is an invertible matrix, $\det(A) \neq 0$. Therefore, we can divide both sides of the equation by $\det(A)$ to solve for $\det(A^{-1})$:
$\det(A^{-1}) = \frac{1}{\det(A)}$
Conclusion:
For an invertible matrix $A$ of order 2 (or any order $n$), the determinant of its inverse $A^{-1}$ is equal to the reciprocal of the determinant of $A$.
$\det(A^{-1}) = \frac{1}{\det(A)}$
This corresponds to option (B).
The final answer is (B) $\frac{1}{\det(A)}$.
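A short numerical illustration of $\det(A^{-1}) = \frac{1}{\det(A)}$ (assuming NumPy is available; the matrix from Question 12 is reused purely as an example):

```python
import numpy as np

# det(A^{-1}) equals 1/det(A) for any invertible A.
A = np.array([[3, 7], [2, 5]], dtype=float)

print(np.linalg.det(np.linalg.inv(A)))   # ~1.0
print(1 / np.linalg.det(A))              # ~1.0, since det(A) = 15 - 14 = 1
```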
Example 27 to 29 (Before Exercise 4.6)
Example 27: Solve the system of equations
2x + 5y = 1
3x + 2y = 7
Answer:
Given:
The system of linear equations:
$2x + 5y = 1$
$3x + 2y = 7$
Solution:
We can solve the given system of equations using the matrix method. The system of equations can be written in the matrix form $AX = B$, where:
$A = \begin{bmatrix}2&5\\3&2\end{bmatrix}$, $X = \begin{bmatrix}x\\y\end{bmatrix}$, and $B = \begin{bmatrix}1\\7\end{bmatrix}$
The solution to the matrix equation $AX = B$ is given by $X = A^{-1}B$, provided that the matrix $A$ is invertible (i.e., its determinant is non-zero).
First, we calculate the determinant of matrix $A$:
$\det(A) = (2)(2) - (5)(3)$
$\det(A) = 4 - 15 = -11$
Since $\det(A) = -11 \neq 0$, the matrix $A$ is nonsingular, and its inverse $A^{-1}$ exists.
Next, we find the inverse of matrix $A$. For a $2 \times 2$ matrix $\begin{bmatrix}a&b\\c&d\end{bmatrix}$, the inverse is $\frac{1}{ad-bc}\begin{bmatrix}d&-b\\-c&a\end{bmatrix}$.
$A^{-1} = \frac{1}{\det(A)} \text{adj}(A)$
$A^{-1} = \frac{1}{-11} \begin{bmatrix}2&-5\\-3&2\end{bmatrix}$
Now, we calculate $X = A^{-1}B$:
$X = \begin{bmatrix}x\\y\end{bmatrix} = \frac{1}{-11} \begin{bmatrix}2&-5\\-3&2\end{bmatrix} \begin{bmatrix}1\\7\end{bmatrix}$
Perform the matrix multiplication:
$\begin{bmatrix}x\\y\end{bmatrix} = -\frac{1}{11} \begin{bmatrix}(2)(1) + (-5)(7)\\(-3)(1) + (2)(7)\end{bmatrix}$
$\begin{bmatrix}x\\y\end{bmatrix} = -\frac{1}{11} \begin{bmatrix}2 - 35\\-3 + 14\end{bmatrix}$
$\begin{bmatrix}x\\y\end{bmatrix} = -\frac{1}{11} \begin{bmatrix}-33\\11\end{bmatrix}$
Multiply the scalar $-\frac{1}{11}$ by each element in the matrix:
$\begin{bmatrix}x\\y\end{bmatrix} = \begin{bmatrix}(-\frac{1}{11})(-33)\\(-\frac{1}{11})(11)\end{bmatrix}$
$\begin{bmatrix}x\\y\end{bmatrix} = \begin{bmatrix}\frac{33}{11}\\-\frac{11}{11}\end{bmatrix}$
$\begin{bmatrix}x\\y\end{bmatrix} = \begin{bmatrix}3\\-1\end{bmatrix}$
Equating the corresponding elements, we get the values of $x$ and $y$.
Conclusion:
The solution to the given system of equations is $x = 3$ and $y = -1$.
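The same solution can be reproduced with a few lines of NumPy (an optional sketch assuming NumPy is available; the library solver replaces the explicit $A^{-1}B$ computation):

```python
import numpy as np

# Example 27: solve AX = B for X = [x, y].
A = np.array([[2, 5],
              [3, 2]], dtype=float)
B = np.array([1, 7], dtype=float)

x, y = np.linalg.solve(A, B)
print(x, y)    # 3.0 -1.0
```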
Example 28: Solve the following system of equations by matrix method.
3x – 2y + 3z = 8
2x + y – z = 1
4x – 3y + 2z = 4
Answer:
Given:
The system of linear equations:
$3x - 2y + 3z = 8$
$2x + y - z = 1$
$4x - 3y + 2z = 4$
Solution:
We will solve the given system of equations using the matrix method. The system can be written in the matrix form $AX = B$, where:
$A = \begin{bmatrix}3&-2&3\\2&1&-1\\4&-3&2\end{bmatrix}$, $X = \begin{bmatrix}x\\y\\z\end{bmatrix}$, and $B = \begin{bmatrix}8\\1\\4\end{bmatrix}$
The solution is given by $X = A^{-1}B$, provided that the matrix $A$ is invertible.
First, we calculate the determinant of $A$ to check if it is invertible:
$\det(A) = 3 \begin{vmatrix}1&-1\\-3&2\end{vmatrix} - (-2) \begin{vmatrix}2&-1\\4&2\end{vmatrix} + 3 \begin{vmatrix}2&1\\4&-3\end{vmatrix}$
$\det(A) = 3((1)(2) - (-1)(-3)) + 2((2)(2) - (-1)(4)) + 3((2)(-3) - (1)(4))$
$\det(A) = 3(2 - 3) + 2(4 + 4) + 3(-6 - 4)$
$\det(A) = 3(-1) + 2(8) + 3(-10)$
$\det(A) = -3 + 16 - 30$
$\det(A) = 13 - 30 = -17$
Since $\det(A) = -17 \neq 0$, the matrix $A$ is nonsingular, and its inverse $A^{-1}$ exists. Thus, the system has a unique solution.
Next, we find the adjoint of $A$, denoted as $\text{adj}(A)$. This is the transpose of the matrix of cofactors of $A$.
Calculate the cofactors $C_{ij}$ of each element $a_{ij}$ in $A$:
$C_{11} = + \begin{vmatrix}1&-1\\-3&2\end{vmatrix} = (1)(2) - (-1)(-3) = 2 - 3 = -1$
$C_{12} = - \begin{vmatrix}2&-1\\4&2\end{vmatrix} = -((2)(2) - (-1)(4)) = -(4 + 4) = -8$
$C_{13} = + \begin{vmatrix}2&1\\4&-3\end{vmatrix} = (2)(-3) - (1)(4) = -6 - 4 = -10$
$C_{21} = - \begin{vmatrix}-2&3\\-3&2\end{vmatrix} = -((-2)(2) - (3)(-3)) = -(-4 + 9) = -5$
$C_{22} = + \begin{vmatrix}3&3\\4&2\end{vmatrix} = (3)(2) - (3)(4) = 6 - 12 = -6$
$C_{23} = - \begin{vmatrix}3&-2\\4&-3\end{vmatrix} = -((3)(-3) - (-2)(4)) = -(-9 + 8) = 1$
$C_{31} = + \begin{vmatrix}-2&3\\1&-1\end{vmatrix} = (-2)(-1) - (3)(1) = 2 - 3 = -1$
$C_{32} = - \begin{vmatrix}3&3\\2&-1\end{vmatrix} = -((3)(-1) - (3)(2)) = -(-3 - 6) = 9$
$C_{33} = + \begin{vmatrix}3&-2\\2&1\end{vmatrix} = (3)(1) - (-2)(2) = 3 + 4 = 7$
The matrix of cofactors is $\begin{bmatrix}-1&-8&-10\\-5&-6&1\\-1&9&7\end{bmatrix}$.
The adjoint of $A$ is the transpose of the cofactor matrix:
$\text{adj}(A) = \begin{bmatrix}-1&-5&-1\\-8&-6&9\\-10&1&7\end{bmatrix}$
Now, we calculate the inverse of $A$:
$A^{-1} = \frac{1}{\det(A)} \text{adj}(A) = \frac{1}{-17} \begin{bmatrix}-1&-5&-1\\-8&-6&9\\-10&1&7\end{bmatrix} = -\frac{1}{17} \begin{bmatrix}-1&-5&-1\\-8&-6&9\\-10&1&7\end{bmatrix} = \frac{1}{17} \begin{bmatrix}1&5&1\\8&6&-9\\10&-1&-7\end{bmatrix}$
Finally, we solve for $X$ using $X = A^{-1}B$:
$X = \begin{bmatrix}x\\y\\z\end{bmatrix} = \frac{1}{17} \begin{bmatrix}1&5&1\\8&6&-9\\10&-1&-7\end{bmatrix} \begin{bmatrix}8\\1\\4\end{bmatrix}$
Perform the matrix multiplication:
$X = \frac{1}{17} \begin{bmatrix} (1)(8) + (5)(1) + (1)(4) \\ (8)(8) + (6)(1) + (-9)(4) \\ (10)(8) + (-1)(1) + (-7)(4) \end{bmatrix}$
$X = \frac{1}{17} \begin{bmatrix} 8 + 5 + 4 \\ 64 + 6 - 36 \\ 80 - 1 - 28 \end{bmatrix}$
$X = \frac{1}{17} \begin{bmatrix} 17 \\ 70 - 36 \\ 79 - 28 \end{bmatrix}$
$X = \frac{1}{17} \begin{bmatrix} 17 \\ 34 \\ 51 \end{bmatrix}$
Multiply by the scalar $\frac{1}{17}$:
$X = \begin{bmatrix} \frac{17}{17} \\ \frac{34}{17} \\ \frac{51}{17} \end{bmatrix} = \begin{bmatrix} 1 \\ 2 \\ 3 \end{bmatrix}$
Equating the elements of the matrix $X$, we get:
$x = 1$
$y = 2$
$z = 3$
Conclusion:
The solution to the given system of equations is $x = 1$, $y = 2$, and $z = 3$.
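The $3 \times 3$ case works the same way in code; the sketch below (assuming NumPy is available) mirrors the $X = A^{-1}B$ computation:

```python
import numpy as np

# Example 28: X = A^{-1} B for the 3x3 system.
A = np.array([[3, -2, 3],
              [2, 1, -1],
              [4, -3, 2]], dtype=float)
B = np.array([8, 1, 4], dtype=float)

X = np.linalg.inv(A) @ B       # equivalently: np.linalg.solve(A, B)
print(X)                       # [1. 2. 3.]
```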
Example 29: The sum of three numbers is 6. If we multiply the third number by 3 and add the second number to it, we get 11. By adding the first and third numbers, we get double the second number. Represent this algebraically and find the numbers using the matrix method.
Answer:
Given:
The following conditions regarding three numbers:
1. The sum of the three numbers is 6.
2. The sum of the second number and three times the third number is 11.
3. The sum of the first and third numbers is double the second number.
To Find:
The three numbers using the matrix method.
Solution:
Let the three numbers be $x$, $y$, and $z$. We translate the given conditions into a system of linear equations:
From the first condition:
$x + y + z = 6$
From the second condition:
$y + 3z = 11$
From the third condition:
$x + z = 2y$
Rearranging the equations into standard form ($ax + by + cz = d$):
$x + y + z = 6$
$0x + y + 3z = 11$
$x - 2y + z = 0$
This system of equations can be written in the matrix form $AX = B$, where:
$A = \begin{bmatrix}1&1&1\\0&1&3\\1&-2&1\end{bmatrix}$, $X = \begin{bmatrix}x\\y\\z\end{bmatrix}$, and $B = \begin{bmatrix}6\\11\\0\end{bmatrix}$
The solution is given by $X = A^{-1}B$, provided that the matrix $A$ is invertible.
First, we calculate the determinant of $A$:
$\det(A) = 1 \begin{vmatrix}1&3\\-2&1\end{vmatrix} - 1 \begin{vmatrix}0&3\\1&1\end{vmatrix} + 1 \begin{vmatrix}0&1\\1&-2\end{vmatrix}$
$\det(A) = 1((1)(1) - (3)(-2)) - 1((0)(1) - (3)(1)) + 1((0)(-2) - (1)(1))$
$\det(A) = 1(1 + 6) - 1(0 - 3) + 1(0 - 1)$
$\det(A) = 7 + 3 - 1 = 9$
Since $\det(A) = 9 \neq 0$, the matrix $A$ is nonsingular, and its inverse $A^{-1}$ exists. Thus, the system has a unique solution.
Next, we find the adjoint of $A$, $\text{adj}(A)$, which is the transpose of the matrix of cofactors.
The cofactors $C_{ij}$ are:
$C_{11} = \begin{vmatrix}1&3\\-2&1\end{vmatrix} = 7$
$C_{12} = -\begin{vmatrix}0&3\\1&1\end{vmatrix} = 3$
$C_{13} = \begin{vmatrix}0&1\\1&-2\end{vmatrix} = -1$
$C_{21} = -\begin{vmatrix}1&1\\-2&1\end{vmatrix} = -3$
$C_{22} = \begin{vmatrix}1&1\\1&1\end{vmatrix} = 0$
$C_{23} = -\begin{vmatrix}1&1\\1&-2\end{vmatrix} = 3$
$C_{31} = \begin{vmatrix}1&1\\1&3\end{vmatrix} = 2$
$C_{32} = -\begin{vmatrix}1&1\\0&3\end{vmatrix} = -3$
$C_{33} = \begin{vmatrix}1&1\\0&1\end{vmatrix} = 1$
The matrix of cofactors is $\begin{bmatrix}7&3&-1\\-3&0&3\\2&-3&1\end{bmatrix}$.
The adjoint of $A$ is the transpose of this matrix:
$\text{adj}(A) = \begin{bmatrix}7&-3&2\\3&0&-3\\-1&3&1\end{bmatrix}$
Now, we calculate the inverse of $A$:
$A^{-1} = \frac{1}{\det(A)} \text{adj}(A) = \frac{1}{9} \begin{bmatrix}7&-3&2\\3&0&-3\\-1&3&1\end{bmatrix}$
Finally, we solve for $X$ using $X = A^{-1}B$:
$X = \begin{bmatrix}x\\y\\z\end{bmatrix} = \frac{1}{9} \begin{bmatrix}7&-3&2\\3&0&-3\\-1&3&1\end{bmatrix} \begin{bmatrix}6\\11\\0\end{bmatrix}$
Perform the matrix multiplication:
$X = \frac{1}{9} \begin{bmatrix} (7)(6) + (-3)(11) + (2)(0) \\ (3)(6) + (0)(11) + (-3)(0) \\ (-1)(6) + (3)(11) + (1)(0) \end{bmatrix}$
$X = \frac{1}{9} \begin{bmatrix} 42 - 33 + 0 \\ 18 + 0 + 0 \\ -6 + 33 + 0 \end{bmatrix}$
$X = \frac{1}{9} \begin{bmatrix} 9 \\ 18 \\ 27 \end{bmatrix}$
Multiply by the scalar $\frac{1}{9}$:
$X = \begin{bmatrix} \frac{9}{9} \\ \frac{18}{9} \\ \frac{27}{9} \end{bmatrix} = \begin{bmatrix} 1 \\ 2 \\ 3 \end{bmatrix}$
Equating the elements, we find $x=1$, $y=2$, and $z=3$.
Conclusion:
The three numbers are 1, 2, and 3.
Exercise 4.6
Examine the consistency of the system of equations in Exercises 1 to 6
Question 1.
x + 2y = 2
2x + 3y = 3
Answer:
Given:
The system of linear equations:
$x + 2y = 2$
$2x + 3y = 3$
To Examine:
The consistency of the given system of equations.
Solution:
The given system of equations can be written in the matrix form $AX = B$, where:
$A = \begin{bmatrix}1&2\\2&3\end{bmatrix}$, $X = \begin{bmatrix}x\\y\end{bmatrix}$, and $B = \begin{bmatrix}2\\3\end{bmatrix}$
To determine the consistency of the system, we first calculate the determinant of the coefficient matrix $A$.
$\det(A) = (1)(3) - (2)(2)$
$\det(A) = 3 - 4$
$\det(A) = -1$
Since $\det(A) = -1 \neq 0$, the matrix $A$ is a nonsingular matrix.
For a system of linear equations $AX = B$, if $\det(A) \neq 0$, then the system is consistent and has a unique solution given by $X = A^{-1}B$.
Conclusion:
Since the determinant of the coefficient matrix is non-zero, the given system of equations is consistent.
Question 2.
2x – y = 5
x + y = 4
Answer:
Given:
The system of linear equations:
$2x - y = 5$
$x + y = 4$
To Examine:
The consistency of the given system of equations.
Solution:
The given system of equations can be written in the matrix form $AX = B$, where:
$A = \begin{bmatrix}2&-1\\1&1\end{bmatrix}$, $X = \begin{bmatrix}x\\y\end{bmatrix}$, and $B = \begin{bmatrix}5\\4\end{bmatrix}$
To determine the consistency of the system, we calculate the determinant of the coefficient matrix $A$.
$\det(A) = (2)(1) - (-1)(1)$
$\det(A) = 2 - (-1)$
$\det(A) = 2 + 1 = 3$
Since $\det(A) = 3 \neq 0$, the matrix $A$ is a nonsingular matrix.
For a system of linear equations $AX = B$, if $\det(A) \neq 0$, then the matrix $A$ is invertible ($A^{-1}$ exists), and the system is consistent and has a unique solution given by $X = A^{-1}B$.
Conclusion:
Since the determinant of the coefficient matrix is non-zero, the given system of equations is consistent.
Question 3.
x + 3y = 5
2x + 6y = 8
Answer:
Given:
The system of linear equations:
$x + 3y = 5$
$2x + 6y = 8$
To Examine:
The consistency of the given system of equations.
Solution:
We can write the given system of equations in the matrix form $AX = B$, where:
$A = \begin{bmatrix}1&3\\2&6\end{bmatrix}$, $X = \begin{bmatrix}x\\y\end{bmatrix}$, and $B = \begin{bmatrix}5\\8\end{bmatrix}$
To determine the consistency of the system, we first calculate the determinant of the coefficient matrix $A$.
$\det(A) = (1)(6) - (3)(2)$
$\det(A) = 6 - 6 = 0$
Since $\det(A) = 0$, the matrix $A$ is a singular matrix. In this case, we need to calculate the product of the adjoint of $A$ and the matrix $B$, i.e., $(\text{adj} A)B$, to determine the consistency.
First, we find the adjoint of the $2 \times 2$ matrix $A = \begin{bmatrix}a&b\\c&d\end{bmatrix}$, which is $\text{adj}(A) = \begin{bmatrix}d&-b\\-c&a\end{bmatrix}$.
So, $\text{adj}(A) = \begin{bmatrix}6&-3\\-2&1\end{bmatrix}$.
Now, we calculate $(\text{adj} A)B$:
$(\text{adj} A)B = \begin{bmatrix}6&-3\\-2&1\end{bmatrix} \begin{bmatrix}5\\8\end{bmatrix}$
Perform the matrix multiplication:
$(\text{adj} A)B = \begin{bmatrix}(6)(5) + (-3)(8)\\(-2)(5) + (1)(8)\end{bmatrix}$
$(\text{adj} A)B = \begin{bmatrix}30 - 24\\-10 + 8\end{bmatrix}$
$(\text{adj} A)B = \begin{bmatrix}6\\-2\end{bmatrix}$
The zero matrix of the same order is $O = \begin{bmatrix}0\\0\end{bmatrix}$.
We observe that $(\text{adj} A)B = \begin{bmatrix}6\\-2\end{bmatrix} \neq \begin{bmatrix}0\\0\end{bmatrix} = O$.
For a system of linear equations $AX = B$, if $\det(A) = 0$, then:
If $(\text{adj} A)B \neq O$, the system is inconsistent (has no solution).
If $(\text{adj} A)B = O$, the system is consistent (has infinitely many solutions).
In this case, since $\det(A) = 0$ and $(\text{adj} A)B \neq O$, the system is inconsistent.
Conclusion:
Since the determinant of the coefficient matrix is zero and $(\text{adj} A)B$ is not the zero matrix, the given system of equations is inconsistent.
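The singular-case test used here ($\det A = 0$ together with $(\text{adj } A)B \neq O$) can be mirrored numerically; the following sketch assumes NumPy is available and is only a cross-check:

```python
import numpy as np

# Question 3 of Exercise 4.6: det(A) = 0 and (adj A)B != O,
# so the system has no solution.
A = np.array([[1, 3],
              [2, 6]], dtype=float)
B = np.array([[5], [8]], dtype=float)

adj_A = np.array([[A[1, 1], -A[0, 1]],
                  [-A[1, 0], A[0, 0]]])

print(np.isclose(np.linalg.det(A), 0.0))   # True: A is singular
print(adj_A @ B)                           # [[6.], [-2.]] -- not the zero vector
```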
Question 4.
x + y + z = 1
2x + 3y + 2z = 2
ax + ay + 2az = 4
Answer:
Given:
The system of linear equations:
$x + y + z = 1$
$2x + 3y + 2z = 2$
$ax + ay + 2az = 4$
To Examine:
The consistency of the given system of equations.
Solution:
We can write the given system of equations in the matrix form $AX = B$, where:
$A = \begin{bmatrix}1&1&1\\2&3&2\\a&a&2a\end{bmatrix}$, $X = \begin{bmatrix}x\\y\\z\end{bmatrix}$, and $B = \begin{bmatrix}1\\2\\4\end{bmatrix}$
To determine the consistency of the system, we first calculate the determinant of the coefficient matrix $A$.
$\det(A) = 1 \begin{vmatrix}3&2\\a&2a\end{vmatrix} - 1 \begin{vmatrix}2&2\\a&2a\end{vmatrix} + 1 \begin{vmatrix}2&3\\a&a\end{vmatrix}$
$\det(A) = 1((3)(2a) - (2)(a)) - 1((2)(2a) - (2)(a)) + 1((2)(a) - (3)(a))$
$\det(A) = 1(6a - 2a) - 1(4a - 2a) + 1(2a - 3a)$
$\det(A) = 4a - 2a - a = a$
The consistency of the system depends on the value of $\det(A)$.
Case 1: $\det(A) \neq 0$
If $\det(A) \neq 0$, which means $a \neq 0$, then the matrix $A$ is nonsingular. In this case, the inverse $A^{-1}$ exists, and the system $AX = B$ has a unique solution given by $X = A^{-1}B$.
Thus, if $a \neq 0$, the system is consistent.
Case 2: $\det(A) = 0$
If $\det(A) = 0$, which means $a = 0$, then the matrix $A$ is singular. In this case, we need to examine the product $(\text{adj} A)B$.
If $a = 0$, the matrix $A$ becomes $A = \begin{bmatrix}1&1&1\\2&3&2\\0&0&0\end{bmatrix}$.
First, find the adjoint of $A$ when $a=0$. The cofactors are:
$C_{11} = \begin{vmatrix}3&2\\0&0\end{vmatrix} = 0$
$C_{12} = -\begin{vmatrix}2&2\\0&0\end{vmatrix} = 0$
$C_{13} = \begin{vmatrix}2&3\\0&0\end{vmatrix} = 0$
$C_{21} = -\begin{vmatrix}1&1\\0&0\end{vmatrix} = 0$
$C_{22} = \begin{vmatrix}1&1\\0&0\end{vmatrix} = 0$
$C_{23} = -\begin{vmatrix}1&1\\0&0\end{vmatrix} = 0$
$C_{31} = \begin{vmatrix}1&1\\3&2\end{vmatrix} = 1(2)-1(3) = -1$
$C_{32} = -\begin{vmatrix}1&1\\2&2\end{vmatrix} = -(1(2)-1(2)) = 0$
$C_{33} = \begin{vmatrix}1&1\\2&3\end{vmatrix} = 1(3)-1(2) = 1$
The matrix of cofactors is $\begin{bmatrix}0&0&0\\0&0&0\\-1&0&1\end{bmatrix}$.
The adjoint of $A$ is the transpose of this matrix: $\text{adj}(A) = \begin{bmatrix}0&0&-1\\0&0&0\\0&0&1\end{bmatrix}$.
Now calculate $(\text{adj} A)B$:
$(\text{adj} A)B = \begin{bmatrix}0&0&-1\\0&0&0\\0&0&1\end{bmatrix} \begin{bmatrix}1\\2\\4\end{bmatrix}$
$(\text{adj} A)B = \begin{bmatrix} (0)(1) + (0)(2) + (-1)(4) \\ (0)(1) + (0)(2) + (0)(4) \\ (0)(1) + (0)(2) + (1)(4) \end{bmatrix} = \begin{bmatrix} 0 + 0 - 4 \\ 0 + 0 + 0 \\ 0 + 0 + 4 \end{bmatrix} = \begin{bmatrix} -4 \\ 0 \\ 4 \end{bmatrix}$
Since $(\text{adj} A)B = \begin{bmatrix}-4\\0\\4\end{bmatrix} \neq \begin{bmatrix}0\\0\\0\end{bmatrix} = O$ when $a=0$, the system is inconsistent (has no solution) when $a=0$.
Conclusion:
The system of equations is consistent if $a \neq 0$ and inconsistent if $a = 0$.
Question 5.
3x – y – 2z = 2
2y – z = –1
3x – 5y = 3
Answer:
Given:
The system of linear equations:
$3x - y - 2z = 2$
$2y - z = -1$
$3x - 5y = 3$
We can rewrite the equations to explicitly show all variables:
$3x - 1y - 2z = 2$
$0x + 2y - 1z = -1$
$3x - 5y + 0z = 3$
To Examine:
The consistency of the given system of equations.
Solution:
We write the given system of equations in the matrix form $AX = B$, where:
$A = \begin{bmatrix}3&-1&-2\\0&2&-1\\3&-5&0\end{bmatrix}$, $X = \begin{bmatrix}x\\y\\z\end{bmatrix}$, and $B = \begin{bmatrix}2\\-1\\3\end{bmatrix}$
To determine the consistency of the system, we first calculate the determinant of the coefficient matrix $A$.
$\det(A) = 3 \begin{vmatrix}2&-1\\-5&0\end{vmatrix} - (-1) \begin{vmatrix}0&-1\\3&0\end{vmatrix} + (-2) \begin{vmatrix}0&2\\3&-5\end{vmatrix}$
$\det(A) = 3((2)(0) - (-1)(-5)) + 1((0)(0) - (-1)(3)) - 2((0)(-5) - (2)(3))$
$\det(A) = 3(0 - 5) + 1(0 + 3) - 2(0 - 6)$
$\det(A) = 3(-5) + 1(3) - 2(-6)$
$\det(A) = -15 + 3 + 12$
$\det(A) = -15 + 15 = 0$
Since $\det(A) = 0$, the matrix $A$ is a singular matrix. The system is either inconsistent or consistent with infinitely many solutions. To differentiate, we need to calculate $(\text{adj} A)B$.
First, we find the adjoint of $A$, $\text{adj}(A)$. This is the transpose of the matrix of cofactors of $A$.
The cofactors $C_{ij}$ are:
$C_{11} = +\begin{vmatrix}2&-1\\-5&0\end{vmatrix} = (2)(0) - (-1)(-5) = 0 - 5 = -5$
$C_{12} = -\begin{vmatrix}0&-1\\3&0\end{vmatrix} = -((0)(0) - (-1)(3)) = -(0 + 3) = -3$
$C_{13} = +\begin{vmatrix}0&2\\3&-5\end{vmatrix} = (0)(-5) - (2)(3) = 0 - 6 = -6$
$C_{21} = -\begin{vmatrix}-1&-2\\-5&0\end{vmatrix} = -((-1)(0) - (-2)(-5)) = -(0 - 10) = 10$
$C_{22} = +\begin{vmatrix}3&-2\\3&0\end{vmatrix} = (3)(0) - (-2)(3) = 0 - (-6) = 6$
$C_{23} = -\begin{vmatrix}3&-1\\3&-5\end{vmatrix} = -((3)(-5) - (-1)(3)) = -(-15 + 3) = 12$
$C_{31} = +\begin{vmatrix}-1&-2\\2&-1\end{vmatrix} = (-1)(-1) - (-2)(2) = 1 - (-4) = 5$
$C_{32} = -\begin{vmatrix}3&-2\\0&-1\end{vmatrix} = -((3)(-1) - (-2)(0)) = -(-3 - 0) = 3$
$C_{33} = +\begin{vmatrix}3&-1\\0&2\end{vmatrix} = (3)(2) - (-1)(0) = 6 - 0 = 6$
The matrix of cofactors is $\begin{bmatrix}-5&-3&-6\\10&6&12\\5&3&6\end{bmatrix}$.
The adjoint of $A$ is the transpose of the cofactor matrix:
$\text{adj}(A) = \begin{bmatrix}-5&10&5\\-3&6&3\\-6&12&6\end{bmatrix}$
Now, we calculate $(\text{adj} A)B$:
$(\text{adj} A)B = \begin{bmatrix}-5&10&5\\-3&6&3\\-6&12&6\end{bmatrix} \begin{bmatrix}2\\-1\\3\end{bmatrix}$
$(\text{adj} A)B = \begin{bmatrix}(-5)(2) + (10)(-1) + (5)(3)\\(-3)(2) + (6)(-1) + (3)(3)\\(-6)(2) + (12)(-1) + (6)(3)\end{bmatrix}$
$(\text{adj} A)B = \begin{bmatrix}-10 - 10 + 15\\-6 - 6 + 9\\-12 - 12 + 18\end{bmatrix}$
$(\text{adj} A)B = \begin{bmatrix}-20 + 15\\-12 + 9\\-24 + 18\end{bmatrix} = \begin{bmatrix}-5\\-3\\-6\end{bmatrix}$
The zero matrix of order $3 \times 1$ is $O = \begin{bmatrix}0\\0\\0\end{bmatrix}$.
We see that $(\text{adj} A)B = \begin{bmatrix}-5\\-3\\-6\end{bmatrix} \neq \begin{bmatrix}0\\0\\0\end{bmatrix} = O$.
For a system of linear equations $AX = B$ with $\det(A) = 0$:
If $(\text{adj} A)B \neq O$, the system is inconsistent (has no solution).
If $(\text{adj} A)B = O$, the system is consistent (has infinitely many solutions).
In this case, since $\det(A) = 0$ and $(\text{adj} A)B \neq O$, the system is inconsistent.
Conclusion:
Since the determinant of the coefficient matrix is zero and $(\text{adj} A)B$ is not the zero matrix, the given system of equations is inconsistent.
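For readers who want a machine check, the short SymPy sketch below (an addition, not part of the NCERT working) confirms that $\det(A) = 0$ and $(\text{adj}\,A)B \neq O$ for this system.

```python
import sympy as sp

A = sp.Matrix([[3, -1, -2], [0, 2, -1], [3, -5, 0]])
B = sp.Matrix([2, -1, 3])

print(A.det())            # 0 -> A is singular
print(A.adjugate() * B)   # Matrix([[-5], [-3], [-6]]) != O -> inconsistent
```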
Question 6.
5x – y + 4z = 5
2x + 3y + 5z = 2
5x – 2y + 6z = –1
Answer:
Given:
The system of linear equations:
$5x - y + 4z = 5$
$2x + 3y + 5z = 2$
$5x - 2y + 6z = -1$
To Examine:
The consistency of the given system of equations.
Solution:
We can write the given system of equations in the matrix form $AX = B$, where:
$A = \begin{bmatrix}5&-1&4\\2&3&5\\5&-2&6\end{bmatrix}$, $X = \begin{bmatrix}x\\y\\z\end{bmatrix}$, and $B = \begin{bmatrix}5\\2\\-1\end{bmatrix}$
To determine the consistency of the system, we first calculate the determinant of the coefficient matrix $A$.
$\det(A) = 5 \begin{vmatrix}3&5\\-2&6\end{vmatrix} - (-1) \begin{vmatrix}2&5\\5&6\end{vmatrix} + 4 \begin{vmatrix}2&3\\5&-2\end{vmatrix}$
$\det(A) = 5((3)(6) - (5)(-2)) + 1((2)(6) - (5)(5)) + 4((2)(-2) - (3)(5))$
$\det(A) = 5(18 + 10) + 1(12 - 25) + 4(-4 - 15)$
$\det(A) = 5(28) + 1(-13) + 4(-19)$
$\det(A) = 140 - 13 - 76$
$\det(A) = 140 - 89$
$\det(A) = 51$
Since $\det(A) = 51 \neq 0$, the matrix $A$ is a nonsingular matrix.
For a system of linear equations $AX = B$, if $\det(A) \neq 0$, then the matrix $A$ is invertible ($A^{-1}$ exists), and the system is consistent and has a unique solution given by $X = A^{-1}B$.
Conclusion:
Since the determinant of the coefficient matrix is non-zero, the given system of equations is consistent.
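As a numerical cross-check (using NumPy rather than hand cofactor expansion), the determinant can be recomputed as follows.

```python
import numpy as np

A = np.array([[5, -1, 4], [2, 3, 5], [5, -2, 6]], dtype=float)
print(round(np.linalg.det(A)))   # 51 -> nonsingular, so the system is consistent
```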
Solve system of linear equations, using matrix method, in Exercises 7 to 14.
Question 7.
5x + 2y = 4
7x + 3y = 5
Answer:
Given:
The system of linear equations:
$5x + 2y = 4$
$7x + 3y = 5$
Solution:
We can write the given system of equations in the matrix form $AX = B$, where:
$A = \begin{bmatrix}5&2\\7&3\end{bmatrix}$, $X = \begin{bmatrix}x\\y\end{bmatrix}$, and $B = \begin{bmatrix}4\\5\end{bmatrix}$
To solve for $X$ using the matrix method, we need to find the inverse of matrix $A$, i.e., $A^{-1}$. The solution is given by $X = A^{-1}B$. First, we calculate the determinant of $A$ to check if it is invertible.
$\det(A) = (5)(3) - (2)(7)$
$\det(A) = 15 - 14 = 1$
Since $\det(A) = 1 \neq 0$, the matrix $A$ is nonsingular, and its inverse $A^{-1}$ exists. The system is consistent and has a unique solution.
Now, we find the inverse of matrix $A$. For a $2 \times 2$ matrix $\begin{bmatrix}a&b\\c&d\end{bmatrix}$, the inverse is $\frac{1}{ad-bc}\begin{bmatrix}d&-b\\-c&a\end{bmatrix}$.
$A^{-1} = \frac{1}{\det(A)} \text{adj}(A)$
$A^{-1} = \frac{1}{1} \begin{bmatrix}3&-2\\-7&5\end{bmatrix} = \begin{bmatrix}3&-2\\-7&5\end{bmatrix}$
Now, we calculate $X = A^{-1}B$:
$X = \begin{bmatrix}x\\y\end{bmatrix} = \begin{bmatrix}3&-2\\-7&5\end{bmatrix} \begin{bmatrix}4\\5\end{bmatrix}$
Perform the matrix multiplication:
$\begin{bmatrix}x\\y\end{bmatrix} = \begin{bmatrix}(3)(4) + (-2)(5)\\(-7)(4) + (5)(5)\end{bmatrix}$
$\begin{bmatrix}x\\y\end{bmatrix} = \begin{bmatrix}12 - 10\\-28 + 25\end{bmatrix}$
$\begin{bmatrix}x\\y\end{bmatrix} = \begin{bmatrix}2\\-3\end{bmatrix}$
Equating the corresponding elements, we get the values of $x$ and $y$.
Conclusion:
The solution to the given system of equations is $x = 2$ and $y = -3$.
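The same answer can be cross-checked numerically; the sketch below uses NumPy's general linear solver instead of the adjoint formula, purely as verification.

```python
import numpy as np

A = np.array([[5, 2], [7, 3]], dtype=float)
B = np.array([4, 5], dtype=float)
print(np.linalg.solve(A, B))   # [ 2. -3.]  ->  x = 2, y = -3
```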
Question 8.
2x – y = –2
3x + 4y = 3
Answer:
Given:
The system of linear equations:
$2x - y = -2$
$3x + 4y = 3$
Solution:
We can write the given system of equations in the matrix form $AX = B$, where:
$A = \begin{bmatrix}2&-1\\3&4\end{bmatrix}$, $X = \begin{bmatrix}x\\y\end{bmatrix}$, and $B = \begin{bmatrix}-2\\3\end{bmatrix}$
To solve for $X$ using the matrix method, we need to find the inverse of matrix $A$, i.e., $A^{-1}$. The solution is given by $X = A^{-1}B$. First, we calculate the determinant of $A$ to check if it is invertible.
$\det(A) = (2)(4) - (-1)(3)$
$\det(A) = 8 - (-3)$
$\det(A) = 8 + 3 = 11$
Since $\det(A) = 11 \neq 0$, the matrix $A$ is nonsingular, and its inverse $A^{-1}$ exists. The system is consistent and has a unique solution.
Now, we find the inverse of matrix $A$. For a $2 \times 2$ matrix $\begin{bmatrix}a&b\\c&d\end{bmatrix}$, the inverse is $\frac{1}{ad-bc}\begin{bmatrix}d&-b\\-c&a\end{bmatrix}$.
$A^{-1} = \frac{1}{\det(A)} \text{adj}(A)$
$A^{-1} = \frac{1}{11} \begin{bmatrix}4&1\\-3&2\end{bmatrix}$
Now, we calculate $X = A^{-1}B$:
$X = \begin{bmatrix}x\\y\end{bmatrix} = \frac{1}{11} \begin{bmatrix}4&1\\-3&2\end{bmatrix} \begin{bmatrix}-2\\3\end{bmatrix}$
Perform the matrix multiplication:
$\begin{bmatrix}x\\y\end{bmatrix} = \frac{1}{11} \begin{bmatrix}(4)(-2) + (1)(3)\\(-3)(-2) + (2)(3)\end{bmatrix}$
$\begin{bmatrix}x\\y\end{bmatrix} = \frac{1}{11} \begin{bmatrix}-8 + 3\\6 + 6\end{bmatrix}$
$\begin{bmatrix}x\\y\end{bmatrix} = \frac{1}{11} \begin{bmatrix}-5\\12\end{bmatrix}$
Multiply the scalar $\frac{1}{11}$ by each element in the matrix:
$\begin{bmatrix}x\\y\end{bmatrix} = \begin{bmatrix}\frac{-5}{11}\\\frac{12}{11}\end{bmatrix}$
Equating the corresponding elements, we get the values of $x$ and $y$.
Conclusion:
The solution to the given system of equations is $x = -\frac{5}{11}$ and $y = \frac{12}{11}$.
Question 9.
4x – 3y = 3
3x – 5y = 7
Answer:
Given:
The system of linear equations:
$4x - 3y = 3$
$3x - 5y = 7$
Solution:
We can write the given system of equations in the matrix form $AX = B$, where:
$A = \begin{bmatrix}4&-3\\3&-5\end{bmatrix}$, $X = \begin{bmatrix}x\\y\end{bmatrix}$, and $B = \begin{bmatrix}3\\7\end{bmatrix}$
To solve for $X$ using the matrix method, we need to find the inverse of matrix $A$, i.e., $A^{-1}$. The solution is given by $X = A^{-1}B$. First, we calculate the determinant of $A$ to check if it is invertible.
$\det(A) = (4)(-5) - (-3)(3)$
$\det(A) = -20 - (-9)$
$\det(A) = -20 + 9 = -11$
Since $\det(A) = -11 \neq 0$, the matrix $A$ is nonsingular, and its inverse $A^{-1}$ exists. The system is consistent and has a unique solution.
Now, we find the inverse of matrix $A$. For a $2 \times 2$ matrix $\begin{bmatrix}a&b\\c&d\end{bmatrix}$, the inverse is $\frac{1}{ad-bc}\begin{bmatrix}d&-b\\-c&a\end{bmatrix}$.
$A^{-1} = \frac{1}{\det(A)} \text{adj}(A)$
$A^{-1} = \frac{1}{-11} \begin{bmatrix}-5&3\\-3&4\end{bmatrix}$
Now, we calculate $X = A^{-1}B$:
$X = \begin{bmatrix}x\\y\end{bmatrix} = \frac{1}{-11} \begin{bmatrix}-5&3\\-3&4\end{bmatrix} \begin{bmatrix}3\\7\end{bmatrix}$
Perform the matrix multiplication:
$\begin{bmatrix}x\\y\end{bmatrix} = -\frac{1}{11} \begin{bmatrix}(-5)(3) + (3)(7)\\(-3)(3) + (4)(7)\end{bmatrix}$
$\begin{bmatrix}x\\y\end{bmatrix} = -\frac{1}{11} \begin{bmatrix}-15 + 21\\-9 + 28\end{bmatrix}$
$\begin{bmatrix}x\\y\end{bmatrix} = -\frac{1}{11} \begin{bmatrix}6\\19\end{bmatrix}$
Multiply the scalar $-\frac{1}{11}$ by each element in the matrix:
$\begin{bmatrix}x\\y\end{bmatrix} = \begin{bmatrix}(-\frac{1}{11})(6)\\(-\frac{1}{11})(19)\end{bmatrix}$
$\begin{bmatrix}x\\y\end{bmatrix} = \begin{bmatrix}-\frac{6}{11}\\-\frac{19}{11}\end{bmatrix}$
Equating the corresponding elements, we get the values of $x$ and $y$.
Conclusion:
The solution to the given system of equations is $x = -\frac{6}{11}$ and $y = -\frac{19}{11}$.
Question 10.
5x + 2y = 3
3x + 2y = 5
Answer:
Given:
The system of linear equations:
$5x + 2y = 3$
$3x + 2y = 5$
Solution:
We can write the given system of equations in the matrix form $AX = B$, where:
$A = \begin{bmatrix}5&2\\3&2\end{bmatrix}$, $X = \begin{bmatrix}x\\y\end{bmatrix}$, and $B = \begin{bmatrix}3\\5\end{bmatrix}$
To solve for $X$ using the matrix method, we need to find the inverse of matrix $A$, i.e., $A^{-1}$. The solution is given by $X = A^{-1}B$. First, we calculate the determinant of $A$ to check if it is invertible.
$\det(A) = (5)(2) - (2)(3)$
$\det(A) = 10 - 6 = 4$
Since $\det(A) = 4 \neq 0$, the matrix $A$ is nonsingular, and its inverse $A^{-1}$ exists. The system is consistent and has a unique solution.
Now, we find the inverse of matrix $A$. For a $2 \times 2$ matrix $\begin{bmatrix}a&b\\c&d\end{bmatrix}$, the inverse is $\frac{1}{ad-bc}\begin{bmatrix}d&-b\\-c&a\end{bmatrix}$.
$A^{-1} = \frac{1}{\det(A)} \text{adj}(A)$
$A^{-1} = \frac{1}{4} \begin{bmatrix}2&-2\\-3&5\end{bmatrix}$
Now, we calculate $X = A^{-1}B$:
$X = \begin{bmatrix}x\\y\end{bmatrix} = \frac{1}{4} \begin{bmatrix}2&-2\\-3&5\end{bmatrix} \begin{bmatrix}3\\5\end{bmatrix}$
Perform the matrix multiplication:
$\begin{bmatrix}x\\y\end{bmatrix} = \frac{1}{4} \begin{bmatrix}(2)(3) + (-2)(5)\\(-3)(3) + (5)(5)\end{bmatrix}$
$\begin{bmatrix}x\\y\end{bmatrix} = \frac{1}{4} \begin{bmatrix}6 - 10\\-9 + 25\end{bmatrix}$
$\begin{bmatrix}x\\y\end{bmatrix} = \frac{1}{4} \begin{bmatrix}-4\\16\end{bmatrix}$
Multiply the scalar $\frac{1}{4}$ by each element in the matrix:
$\begin{bmatrix}x\\y\end{bmatrix} = \begin{bmatrix}\frac{-4}{4}\\\frac{16}{4}\end{bmatrix}$
$\begin{bmatrix}x\\y\end{bmatrix} = \begin{bmatrix}-1\\4\end{bmatrix}$
Equating the corresponding elements, we get the values of $x$ and $y$.
Conclusion:
The solution to the given system of equations is $x = -1$ and $y = 4$.
Question 11.
2x + y + z = 1
x – 2y – z = $\frac{3}{2}$
3y – 5z = 9
Answer:
Given:
The system of linear equations:
$2x + y + z = 1$
$x - 2y - z = \frac{3}{2}$
$3y - 5z = 9$
We can write the equations with explicit coefficients for all variables:
$2x + 1y + 1z = 1$
$1x - 2y - 1z = \frac{3}{2}$
$0x + 3y - 5z = 9$
Solution:
We can write the given system of equations in the matrix form $AX = B$, where:
$A = \begin{bmatrix}2&1&1\\1&-2&-1\\0&3&-5\end{bmatrix}$, $X = \begin{bmatrix}x\\y\\z\end{bmatrix}$, and $B = \begin{bmatrix}1\\\frac{3}{2}\\9\end{bmatrix}$
To solve for $X$ using the matrix method, we need to find the inverse of matrix $A$, i.e., $A^{-1}$. The solution is given by $X = A^{-1}B$. First, we calculate the determinant of $A$ to check if it is invertible.
$\det(A) = 2 \begin{vmatrix}-2&-1\\3&-5\end{vmatrix} - 1 \begin{vmatrix}1&-1\\0&-5\end{vmatrix} + 1 \begin{vmatrix}1&-2\\0&3\end{vmatrix}$
$\det(A) = 2((-2)(-5) - (-1)(3)) - 1((1)(-5) - (-1)(0)) + 1((1)(3) - (-2)(0))$
$\det(A) = 2(10 + 3) - 1(-5 - 0) + 1(3 - 0)$
$\det(A) = 2(13) - 1(-5) + 1(3)$
$\det(A) = 26 + 5 + 3 = 34$
Since $\det(A) = 34 \neq 0$, the matrix $A$ is nonsingular, and its inverse $A^{-1}$ exists. The system is consistent and has a unique solution.
Next, we find the adjoint of $A$, $\text{adj}(A)$. This is the transpose of the matrix of cofactors of $A$.
The cofactors $C_{ij}$ are:
$C_{11} = +\begin{vmatrix}-2&-1\\3&-5\end{vmatrix} = (-2)(-5) - (-1)(3) = 10 + 3 = 13$
$C_{12} = -\begin{vmatrix}1&-1\\0&-5\end{vmatrix} = -((1)(-5) - (-1)(0)) = -(-5 - 0) = 5$
$C_{13} = +\begin{vmatrix}1&-2\\0&3\end{vmatrix} = (1)(3) - (-2)(0) = 3 - 0 = 3$
$C_{21} = -\begin{vmatrix}1&1\\3&-5\end{vmatrix} = -((1)(-5) - (1)(3)) = -(-5 - 3) = 8$
$C_{22} = +\begin{vmatrix}2&1\\0&-5\end{vmatrix} = (2)(-5) - (1)(0) = -10 - 0 = -10$
$C_{23} = -\begin{vmatrix}2&1\\0&3\end{vmatrix} = -((2)(3) - (1)(0)) = -(6 - 0) = -6$
$C_{31} = +\begin{vmatrix}1&1\\-2&-1\end{vmatrix} = (1)(-1) - (1)(-2) = -1 + 2 = 1$
$C_{32} = -\begin{vmatrix}2&1\\1&-1\end{vmatrix} = -((2)(-1) - (1)(1)) = -(-2 - 1) = 3$
$C_{33} = +\begin{vmatrix}2&1\\1&-2\end{vmatrix} = (2)(-2) - (1)(1) = -4 - 1 = -5$
The matrix of cofactors is $\begin{bmatrix}13&5&3\\8&-10&-6\\1&3&-5\end{bmatrix}$.
The adjoint of $A$ is the transpose of the cofactor matrix:
$\text{adj}(A) = \begin{bmatrix}13&8&1\\5&-10&3\\3&-6&-5\end{bmatrix}$
Now, we calculate the inverse of $A$:
$A^{-1} = \frac{1}{\det(A)} \text{adj}(A) = \frac{1}{34} \begin{bmatrix}13&8&1\\5&-10&3\\3&-6&-5\end{bmatrix}$
Finally, we solve for $X$ using $X = A^{-1}B$:
$X = \begin{bmatrix}x\\y\\z\end{bmatrix} = \frac{1}{34} \begin{bmatrix}13&8&1\\5&-10&3\\3&-6&-5\end{bmatrix} \begin{bmatrix}1\\\frac{3}{2}\\9\end{bmatrix}$
Perform the matrix multiplication:
$X = \frac{1}{34} \begin{bmatrix} (13)(1) + (8)(\frac{3}{2}) + (1)(9) \\ (5)(1) + (-10)(\frac{3}{2}) + (3)(9) \\ (3)(1) + (-6)(\frac{3}{2}) + (-5)(9) \end{bmatrix}$
$X = \frac{1}{34} \begin{bmatrix} 13 + 12 + 9 \\ 5 - 15 + 27 \\ 3 - 9 - 45 \end{bmatrix}$
$X = \frac{1}{34} \begin{bmatrix} 34 \\ 17 \\ -51 \end{bmatrix}$
Multiply by the scalar $\frac{1}{34}$:
$X = \begin{bmatrix} \frac{34}{34} \\ \frac{17}{34} \\ \frac{-51}{34} \end{bmatrix} = \begin{bmatrix} 1 \\ \frac{1}{2} \\ -\frac{3}{2} \end{bmatrix}$
Equating the elements, we find $x=1$, $y=\frac{1}{2}$, and $z=-\frac{3}{2}$.
Conclusion:
The solution to the given system of equations is $x = 1$, $y = \frac{1}{2}$, and $z = -\frac{3}{2}$.
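The adjoint route used above translates directly into code. The SymPy sketch below (a verification aid, not the required working) forms $A^{-1} = \frac{1}{\det A}\,\text{adj}(A)$ with exact arithmetic and multiplies by $B$.

```python
import sympy as sp

A = sp.Matrix([[2, 1, 1], [1, -2, -1], [0, 3, -5]])
B = sp.Matrix([1, sp.Rational(3, 2), 9])

A_inv = A.adjugate() / A.det()     # same formula as in the worked solution
print(A.det())                     # 34
print(A_inv * B)                   # Matrix([[1], [1/2], [-3/2]])
```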
Question 12.
x – y + z = 4
2x + y – 3z = 0
x + y + z = 2
Answer:
Given:
The system of linear equations:
$x - y + z = 4$
$2x + y - 3z = 0$
$x + y + z = 2$
Solution:
We can write the given system of equations in the matrix form $AX = B$, where:
$A = \begin{bmatrix}1&-1&1\\2&1&-3\\1&1&1\end{bmatrix}$, $X = \begin{bmatrix}x\\y\\z\end{bmatrix}$, and $B = \begin{bmatrix}4\\0\\2\end{bmatrix}$
To solve for $X$ using the matrix method, we need to find the inverse of matrix $A$, i.e., $A^{-1}$. The solution is given by $X = A^{-1}B$. First, we calculate the determinant of $A$ to check if it is invertible.
$\det(A) = 1 \begin{vmatrix}1&-3\\1&1\end{vmatrix} - (-1) \begin{vmatrix}2&-3\\1&1\end{vmatrix} + 1 \begin{vmatrix}2&1\\1&1\end{vmatrix}$
$\det(A) = 1((1)(1) - (-3)(1)) + 1((2)(1) - (-3)(1)) + 1((2)(1) - (1)(1))$
$\det(A) = 1(1 + 3) + 1(2 + 3) + 1(2 - 1)$
$\det(A) = 4 + 5 + 1 = 10$
Since $\det(A) = 10 \neq 0$, the matrix $A$ is nonsingular, and its inverse $A^{-1}$ exists. The system is consistent and has a unique solution.
Next, we find the adjoint of $A$, $\text{adj}(A)$. This is the transpose of the matrix of cofactors of $A$.
The cofactors $C_{ij}$ are:
$C_{11} = +\begin{vmatrix}1&-3\\1&1\end{vmatrix} = (1)(1) - (-3)(1) = 1 + 3 = 4$
$C_{12} = -\begin{vmatrix}2&-3\\1&1\end{vmatrix} = -((2)(1) - (-3)(1)) = -(2 + 3) = -5$
$C_{13} = +\begin{vmatrix}2&1\\1&1\end{vmatrix} = (2)(1) - (1)(1) = 2 - 1 = 1$
$C_{21} = -\begin{vmatrix}-1&1\\1&1\end{vmatrix} = -((-1)(1) - (1)(1)) = -(-1 - 1) = 2$
$C_{22} = +\begin{vmatrix}1&1\\1&1\end{vmatrix} = (1)(1) - (1)(1) = 1 - 1 = 0$
$C_{23} = -\begin{vmatrix}1&-1\\1&1\end{vmatrix} = -((1)(1) - (-1)(1)) = -(1 + 1) = -2$
$C_{31} = +\begin{vmatrix}-1&1\\1&-3\end{vmatrix} = (-1)(-3) - (1)(1) = 3 - 1 = 2$
$C_{32} = -\begin{vmatrix}1&1\\2&-3\end{vmatrix} = -((1)(-3) - (1)(2)) = -(-3 - 2) = 5$
$C_{33} = +\begin{vmatrix}1&-1\\2&1\end{vmatrix} = (1)(1) - (-1)(2) = 1 + 2 = 3$
The matrix of cofactors is $\begin{bmatrix}4&-5&1\\2&0&-2\\2&5&3\end{bmatrix}$.
The adjoint of $A$ is the transpose of the cofactor matrix:
$\text{adj}(A) = \begin{bmatrix}4&2&2\\-5&0&5\\1&-2&3\end{bmatrix}$
Now, we calculate the inverse of $A$:
$A^{-1} = \frac{1}{\det(A)} \text{adj}(A) = \frac{1}{10} \begin{bmatrix}4&2&2\\-5&0&5\\1&-2&3\end{bmatrix}$
Finally, we solve for $X$ using $X = A^{-1}B$:
$X = \begin{bmatrix}x\\y\\z\end{bmatrix} = \frac{1}{10} \begin{bmatrix}4&2&2\\-5&0&5\\1&-2&3\end{bmatrix} \begin{bmatrix}4\\0\\2\end{bmatrix}$
Perform the matrix multiplication:
$X = \frac{1}{10} \begin{bmatrix} (4)(4) + (2)(0) + (2)(2) \\ (-5)(4) + (0)(0) + (5)(2) \\ (1)(4) + (-2)(0) + (3)(2) \end{bmatrix}$
$X = \frac{1}{10} \begin{bmatrix} 16 + 0 + 4 \\ -20 + 0 + 10 \\ 4 + 0 + 6 \end{bmatrix}$
$X = \frac{1}{10} \begin{bmatrix} 20 \\ -10 \\ 10 \end{bmatrix}$
Multiply by the scalar $\frac{1}{10}$:
$X = \begin{bmatrix} \frac{20}{10} \\ \frac{-10}{10} \\ \frac{10}{10} \end{bmatrix} = \begin{bmatrix} 2 \\ -1 \\ 1 \end{bmatrix}$
Equating the elements, we find $x=2$, $y=-1$, and $z=1$.
Conclusion:
The solution to the given system of equations is $x = 2$, $y = -1$, and $z = 1$.
Question 13.
2x + 3y + 3z = 5
x – 2y + z = –4
3x – y – 2z = 3
Answer:
Given:
The system of linear equations:
$2x + 3y + 3z = 5$
$x - 2y + z = -4$
$3x - y - 2z = 3$
Solution:
We can write the given system of equations in the matrix form $AX = B$, where:
$A = \begin{bmatrix}2&3&3\\1&-2&1\\3&-1&-2\end{bmatrix}$, $X = \begin{bmatrix}x\\y\\z\end{bmatrix}$, and $B = \begin{bmatrix}5\\-4\\3\end{bmatrix}$
To solve for $X$ using the matrix method, we need to find the inverse of matrix $A$, i.e., $A^{-1}$. The solution is given by $X = A^{-1}B$. First, we calculate the determinant of $A$ to check if it is invertible.
$\det(A) = 2 \begin{vmatrix}-2&1\\-1&-2\end{vmatrix} - 3 \begin{vmatrix}1&1\\3&-2\end{vmatrix} + 3 \begin{vmatrix}1&-2\\3&-1\end{vmatrix}$
$\det(A) = 2((-2)(-2) - (1)(-1)) - 3((1)(-2) - (1)(3)) + 3((1)(-1) - (-2)(3))$
$\det(A) = 2(4 + 1) - 3(-2 - 3) + 3(-1 + 6)$
$\det(A) = 2(5) - 3(-5) + 3(5)$
$\det(A) = 10 + 15 + 15 = 40$
Since $\det(A) = 40 \neq 0$, the matrix $A$ is nonsingular, and its inverse $A^{-1}$ exists. The system is consistent and has a unique solution.
Next, we find the adjoint of $A$, $\text{adj}(A)$. This is the transpose of the matrix of cofactors of $A$.
The cofactors $C_{ij}$ are:
$C_{11} = +\begin{vmatrix}-2&1\\-1&-2\end{vmatrix} = (-2)(-2) - (1)(-1) = 4+1 = 5$
$C_{12} = -\begin{vmatrix}1&1\\3&-2\end{vmatrix} = -((1)(-2) - (1)(3)) = -(-2-3) = 5$
$C_{13} = +\begin{vmatrix}1&-2\\3&-1\end{vmatrix} = (1)(-1) - (-2)(3) = -1+6 = 5$
$C_{21} = -\begin{vmatrix}3&3\\-1&-2\end{vmatrix} = -((3)(-2) - (3)(-1)) = -(-6+3) = 3$
$C_{22} = +\begin{vmatrix}2&3\\3&-2\end{vmatrix} = (2)(-2) - (3)(3) = -4-9 = -13$
$C_{23} = -\begin{vmatrix}2&3\\3&-1\end{vmatrix} = -((2)(-1) - (3)(3)) = -(-2-9) = 11$
$C_{31} = +\begin{vmatrix}3&3\\-2&1\end{vmatrix} = (3)(1) - (3)(-2) = 3+6 = 9$
$C_{32} = -\begin{vmatrix}2&3\\1&1\end{vmatrix} = -((2)(1) - (3)(1)) = -(2-3) = 1$
$C_{33} = +\begin{vmatrix}2&3\\1&-2\end{vmatrix} = (2)(-2) - (3)(1) = -4-3 = -7$
The matrix of cofactors is $\begin{bmatrix}5&5&5\\3&-13&11\\9&1&-7\end{bmatrix}$.
The adjoint of $A$ is the transpose of the cofactor matrix:
$\text{adj}(A) = \begin{bmatrix}5&3&9\\5&-13&1\\5&11&-7\end{bmatrix}$
Now, we calculate the inverse of $A$:
$A^{-1} = \frac{1}{\det(A)} \text{adj}(A) = \frac{1}{40} \begin{bmatrix}5&3&9\\5&-13&1\\5&11&-7\end{bmatrix}$
Finally, we solve for $X$ using $X = A^{-1}B$:
$X = \begin{bmatrix}x\\y\\z\end{bmatrix} = \frac{1}{40} \begin{bmatrix}5&3&9\\5&-13&1\\5&11&-7\end{bmatrix} \begin{bmatrix}5\\-4\\3\end{bmatrix}$
Perform the matrix multiplication:
$X = \frac{1}{40} \begin{bmatrix} (5)(5) + (3)(-4) + (9)(3) \\ (5)(5) + (-13)(-4) + (1)(3) \\ (5)(5) + (11)(-4) + (-7)(3) \end{bmatrix}$
$X = \frac{1}{40} \begin{bmatrix} 25 - 12 + 27 \\ 25 + 52 + 3 \\ 25 - 44 - 21 \end{bmatrix}$
$X = \frac{1}{40} \begin{bmatrix} 40 \\ 80 \\ -40 \end{bmatrix}$
Multiply by the scalar $\frac{1}{40}$:
$X = \begin{bmatrix} \frac{40}{40} \\ \frac{80}{40} \\ \frac{-40}{40} \end{bmatrix} = \begin{bmatrix} 1 \\ 2 \\ -1 \end{bmatrix}$
Equating the elements, we find $x=1$, $y=2$, and $z=-1$.
Conclusion:
The solution to the given system of equations is $x = 1$, $y = 2$, and $z = -1$.
Question 14.
x – y + 2z = 7
3x + 4y – 5z = – 5
2x – y + 3z = 12
Answer:
Given:
The system of linear equations:
$x - y + 2z = 7$
$3x + 4y - 5z = -5$
$2x - y + 3z = 12$
Solution:
We can write the given system of equations in the matrix form $AX = B$, where:
$A = \begin{bmatrix}1&-1&2\\3&4&-5\\2&-1&3\end{bmatrix}$, $X = \begin{bmatrix}x\\y\\z\end{bmatrix}$, and $B = \begin{bmatrix}7\\-5\\12\end{bmatrix}$
To solve for $X$ using the matrix method, we need to find the inverse of matrix $A$, i.e., $A^{-1}$. The solution is given by $X = A^{-1}B$. First, we calculate the determinant of $A$ to check if it is invertible.
$\det(A) = 1 \begin{vmatrix}4&-5\\-1&3\end{vmatrix} - (-1) \begin{vmatrix}3&-5\\2&3\end{vmatrix} + 2 \begin{vmatrix}3&4\\2&-1\end{vmatrix}$
$\det(A) = 1((4)(3) - (-5)(-1)) + 1((3)(3) - (-5)(2)) + 2((3)(-1) - (4)(2))$
$\det(A) = 1(12 - 5) + 1(9 - (-10)) + 2(-3 - 8)$
$\det(A) = 1(7) + 1(9 + 10) + 2(-11)$
$\det(A) = 7 + 19 - 22 = 4$
Since $\det(A) = 4 \neq 0$, the matrix $A$ is nonsingular, and its inverse $A^{-1}$ exists. The system is consistent and has a unique solution.
Next, we find the adjoint of $A$, $\text{adj}(A)$. This is the transpose of the matrix of cofactors of $A$.
The cofactors $C_{ij}$ are:
$C_{11} = +\begin{vmatrix}4&-5\\-1&3\end{vmatrix} = (4)(3) - (-5)(-1) = 12 - 5 = 7$
$C_{12} = -\begin{vmatrix}3&-5\\2&3\end{vmatrix} = -((3)(3) - (-5)(2)) = -(9 + 10) = -19$
$C_{13} = +\begin{vmatrix}3&4\\2&-1\end{vmatrix} = (3)(-1) - (4)(2) = -3 - 8 = -11$
$C_{21} = -\begin{vmatrix}-1&2\\-1&3\end{vmatrix} = -((-1)(3) - (2)(-1)) = -(-3 + 2) = 1$
$C_{22} = +\begin{vmatrix}1&2\\2&3\end{vmatrix} = (1)(3) - (2)(2) = 3 - 4 = -1$
$C_{23} = -\begin{vmatrix}1&-1\\2&-1\end{vmatrix} = -((1)(-1) - (-1)(2)) = -(-1 + 2) = -1$
$C_{31} = +\begin{vmatrix}-1&2\\4&-5\end{vmatrix} = (-1)(-5) - (2)(4) = 5 - 8 = -3$
$C_{32} = -\begin{vmatrix}1&2\\3&-5\end{vmatrix} = -((1)(-5) - (2)(3)) = -(-5 - 6) = 11$
$C_{33} = +\begin{vmatrix}1&-1\\3&4\end{vmatrix} = (1)(4) - (-1)(3) = 4 + 3 = 7$
The matrix of cofactors is $\begin{bmatrix}7&-19&-11\\1&-1&-1\\-3&11&7\end{bmatrix}$.
The adjoint of $A$ is the transpose of the cofactor matrix:
$\text{adj}(A) = \begin{bmatrix}7&1&-3\\-19&-1&11\\-11&-1&7\end{bmatrix}$
Now, we calculate the inverse of $A$:
$A^{-1} = \frac{1}{\det(A)} \text{adj}(A) = \frac{1}{4} \begin{bmatrix}7&1&-3\\-19&-1&11\\-11&-1&7\end{bmatrix}$
Finally, we solve for $X$ using $X = A^{-1}B$:
$X = \begin{bmatrix}x\\y\\z\end{bmatrix} = \frac{1}{4} \begin{bmatrix}7&1&-3\\-19&-1&11\\-11&-1&7\end{bmatrix} \begin{bmatrix}7\\-5\\12\end{bmatrix}$
Perform the matrix multiplication:
$X = \frac{1}{4} \begin{bmatrix} (7)(7) + (1)(-5) + (-3)(12) \\ (-19)(7) + (-1)(-5) + (11)(12) \\ (-11)(7) + (-1)(-5) + (7)(12) \end{bmatrix}$
$X = \frac{1}{4} \begin{bmatrix} 49 - 5 - 36 \\ -133 + 5 + 132 \\ -77 + 5 + 84 \end{bmatrix}$
$X = \frac{1}{4} \begin{bmatrix} 49 - 41 \\ -133 + 137 \\ -77 + 89 \end{bmatrix}$
$X = \frac{1}{4} \begin{bmatrix} 8 \\ 4 \\ 12 \end{bmatrix}$
Multiply by the scalar $\frac{1}{4}$:
$X = \begin{bmatrix} \frac{8}{4} \\ \frac{4}{4} \\ \frac{12}{4} \end{bmatrix} = \begin{bmatrix} 2 \\ 1 \\ 3 \end{bmatrix}$
Equating the elements, we find $x=2$, $y=1$, and $z=3$.
Conclusion:
The solution to the given system of equations is $x = 2$, $y = 1$, and $z = 3$.
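A one-line numerical confirmation with NumPy (not the adjoint method itself):

```python
import numpy as np

A = np.array([[1, -1, 2], [3, 4, -5], [2, -1, 3]], dtype=float)
B = np.array([7, -5, 12], dtype=float)
print(np.linalg.solve(A, B))   # [2. 1. 3.]  ->  x = 2, y = 1, z = 3
```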
Question 15. If A = $\begin{bmatrix}2&−3&5\\3&2&−4\\1&1&−2 \end{bmatrix}$ , find A–1. Using A–1 solve the system of equations
2x – 3y + 5z = 11
3x + 2y – 4z = – 5
x + y – 2z = – 3
Answer:
Given:
Matrix $A = \begin{bmatrix}2&−3&5\\3&2&−4\\1&1&−2 \end{bmatrix}$
System of equations:
$2x – 3y + 5z = 11$
$3x + 2y – 4z = – 5$
$x + y – 2z = – 3$
To Find:
The inverse of matrix A ($A^{-1}$).
The solution to the system of equations using $A^{-1}$.
Solution:
First, we find the inverse of matrix $A$. To do this, we need the determinant of $A$ and the adjoint of $A$.
Calculate $\det(A)$:
$\det(A) = 2 \begin{vmatrix}2&−4\\1&−2\end{vmatrix} - (−3) \begin{vmatrix}3&−4\\1&−2\end{vmatrix} + 5 \begin{vmatrix}3&2\\1&1\end{vmatrix}$
$\det(A) = 2((2)(−2) − (−4)(1)) + 3((3)(−2) − (−4)(1)) + 5((3)(1) − (2)(1))$
$\det(A) = 2(−4 + 4) + 3(−6 + 4) + 5(3 − 2)$
$\det(A) = 2(0) + 3(−2) + 5(1) = 0 - 6 + 5 = -1$
Since $\det(A) = -1 \neq 0$, $A$ is invertible, and $A^{-1}$ exists.
Next, find the matrix of cofactors of $A$:
$C_{11} = +\begin{vmatrix}2&-4\\1&-2\end{vmatrix} = 2(-2) - (-4)(1) = -4 + 4 = 0$
$C_{12} = -\begin{vmatrix}3&-4\\1&-2\end{vmatrix} = -[3(-2) - (-4)(1)] = -[-6 + 4] = 2$
$C_{13} = +\begin{vmatrix}3&2\\1&1\end{vmatrix} = 3(1) - 2(1) = 3 - 2 = 1$
$C_{21} = -\begin{vmatrix}-3&5\\1&-2\end{vmatrix} = -[(-3)(-2) - 5(1)] = -[6 - 5] = -1$
$C_{22} = +\begin{vmatrix}2&5\\1&-2\end{vmatrix} = 2(-2) - 5(1) = -4 - 5 = -9$
$C_{23} = -\begin{vmatrix}2&-3\\1&1\end{vmatrix} = -[2(1) - (-3)(1)] = -[2 + 3] = -5$
$C_{31} = +\begin{vmatrix}-3&5\\2&-4\end{vmatrix} = (-3)(-4) - 5(2) = 12 - 10 = 2$
$C_{32} = -\begin{vmatrix}2&5\\3&-4\end{vmatrix} = -[2(-4) - 5(3)] = -[-8 - 15] = 23$
$C_{33} = +\begin{vmatrix}2&-3\\3&2\end{vmatrix} = 2(2) - (-3)(3) = 4 + 9 = 13$
The matrix of cofactors is $\begin{bmatrix}0&2&1\\-1&-9&-5\\2&23&13\end{bmatrix}$.
The adjoint of $A$ is the transpose of the cofactor matrix:
$\text{adj}(A) = \begin{bmatrix}0&-1&2\\2&-9&23\\1&-5&13\end{bmatrix}$
Now, calculate $A^{-1}$:
$A^{-1} = \frac{1}{\det(A)} \text{adj}(A) = \frac{1}{-1} \begin{bmatrix}0&-1&2\\2&-9&23\\1&-5&13\end{bmatrix} = -1 \begin{bmatrix}0&-1&2\\2&-9&23\\1&-5&13\end{bmatrix}$
$A^{-1} = \begin{bmatrix}0&1&-2\\-2&9&-23\\-1&5&-13\end{bmatrix}$
Now, we use $A^{-1}$ to solve the system of equations. The system can be written in matrix form $AX = B$, where:
$A = \begin{bmatrix}2&−3&5\\3&2&−4\\1&1&−2 \end{bmatrix}$, $X = \begin{bmatrix}x\\y\\z\end{bmatrix}$, and $B = \begin{bmatrix}11\\-5\\-3\end{bmatrix}$
We have already found the inverse of $A$. The solution is given by $X = A^{-1}B$.
$X = \begin{bmatrix}x\\y\\z\end{bmatrix} = \begin{bmatrix}0&1&-2\\-2&9&-23\\-1&5&-13\end{bmatrix} \begin{bmatrix}11\\-5\\-3\end{bmatrix}$
Perform the matrix multiplication:
$X = \begin{bmatrix} (0)(11) + (1)(-5) + (-2)(-3) \\ (-2)(11) + (9)(-5) + (-23)(-3) \\ (-1)(11) + (5)(-5) + (-13)(-3) \end{bmatrix}$
$X = \begin{bmatrix} 0 - 5 + 6 \\ -22 - 45 + 69 \\ -11 - 25 + 39 \end{bmatrix}$
$X = \begin{bmatrix} 1 \\ -67 + 69 \\ -36 + 39 \end{bmatrix}$
$X = \begin{bmatrix}1\\2\\3\end{bmatrix}$
Equating the corresponding elements, we find the values of $x$, $y$, and $z$.
Conclusion:
The inverse of matrix A is $A^{-1} = \begin{bmatrix}0&1&-2\\-2&9&-23\\-1&5&-13\end{bmatrix}$.
The solution to the given system of equations is $x = 1$, $y = 2$, and $z = 3$.
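The sketch below uses SymPy to confirm, with exact arithmetic, that the matrix found above really is $A^{-1}$ and that it yields the stated solution (verification only).

```python
import sympy as sp

A = sp.Matrix([[2, -3, 5], [3, 2, -4], [1, 1, -2]])
A_inv = sp.Matrix([[0, 1, -2], [-2, 9, -23], [-1, 5, -13]])
B = sp.Matrix([11, -5, -3])

print(A * A_inv == sp.eye(3))   # True -> A_inv really is the inverse of A
print(A_inv * B)                # Matrix([[1], [2], [3]])
```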
Question 16. The cost of 4 kg onion, 3 kg wheat and 2 kg rice is ₹ 60. The cost of 2 kg onion, 4 kg wheat and 6 kg rice is ₹ 90. The cost of 6 kg onion, 2 kg wheat and 3 kg rice is ₹ 70. Find the cost of each item per kg by matrix method.
Answer:
Given:
The cost of 4 kg onion, 3 kg wheat and 2 kg rice is $\textsf{₹} 60$.
The cost of 2 kg onion, 4 kg wheat and 6 kg rice is $\textsf{₹} 90$.
The cost of 6 kg onion, 2 kg wheat and 3 kg rice is $\textsf{₹} 70$.
To Find:
The cost of each item per kg by matrix method.
Solution:
Let the cost per kg of onion, wheat, and rice be $x$, $y$, and $z$ respectively.
From the given information, we can form the following system of linear equations:
$4x + 3y + 2z = 60$
$2x + 4y + 6z = 90$
$6x + 2y + 3z = 70$
The second equation can be simplified by dividing by 2:
$x + 2y + 3z = 45$
The system of equations is:
$4x + 3y + 2z = 60$
$x + 2y + 3z = 45$
$6x + 2y + 3z = 70$
We can write this system in matrix form $AX = B$, where:
$A = \begin{pmatrix} 4 & 3 & 2 \\ 1 & 2 & 3 \\ 6 & 2 & 3 \end{pmatrix}$, $X = \begin{pmatrix} x \\ y \\ z \end{pmatrix}$, $B = \begin{pmatrix} 60 \\ 45 \\ 70 \end{pmatrix}$
So, the matrix equation is:
$\begin{pmatrix} 4 & 3 & 2 \\ 1 & 2 & 3 \\ 6 & 2 & 3 \end{pmatrix} \begin{pmatrix} x \\ y \\ z \end{pmatrix} = \begin{pmatrix} 60 \\ 45 \\ 70 \end{pmatrix}$
To solve for $X$, we need to find the inverse of matrix $A$, $A^{-1}$.
First, calculate the determinant of $A$, $\det(A)$.
$\det(A) = 4 \begin{vmatrix} 2 & 3 \\ 2 & 3 \end{vmatrix} - 3 \begin{vmatrix} 1 & 3 \\ 6 & 3 \end{vmatrix} + 2 \begin{vmatrix} 1 & 2 \\ 6 & 2 \end{vmatrix}$
$\det(A) = 4(2 \times 3 - 3 \times 2) - 3(1 \times 3 - 3 \times 6) + 2(1 \times 2 - 2 \times 6)$
$\det(A) = 4(6 - 6) - 3(3 - 18) + 2(2 - 12)$
$\det(A) = 4(0) - 3(-15) + 2(-10)$
$\det(A) = 0 + 45 - 20$
$\det(A) = 25$
Since $\det(A) \neq 0$, the inverse matrix $A^{-1}$ exists.
Next, find the cofactor matrix of $A$. The cofactors are:
$C_{11} = +(2 \times 3 - 3 \times 2) = 0$
$C_{12} = -(1 \times 3 - 3 \times 6) = -(3 - 18) = 15$
$C_{13} = +(1 \times 2 - 2 \times 6) = +(2 - 12) = -10$
$C_{21} = -(3 \times 3 - 2 \times 2) = -(9 - 4) = -5$
$C_{22} = +(4 \times 3 - 2 \times 6) = +(12 - 12) = 0$
$C_{23} = -(4 \times 2 - 3 \times 6) = -(8 - 18) = 10$
$C_{31} = +(3 \times 3 - 2 \times 2) = +(9 - 4) = 5$
$C_{32} = -(4 \times 3 - 2 \times 1) = -(12 - 2) = -10$
$C_{33} = +(4 \times 2 - 3 \times 1) = +(8 - 3) = 5$
The cofactor matrix $C$ is:
$C = \begin{pmatrix} 0 & 15 & -10 \\ -5 & 0 & 10 \\ 5 & -10 & 5 \end{pmatrix}$
The adjoint of $A$, $\text{adj}(A)$, is the transpose of the cofactor matrix:
$\text{adj}(A) = C^T = \begin{pmatrix} 0 & -5 & 5 \\ 15 & 0 & -10 \\ -10 & 10 & 5 \end{pmatrix}$
The inverse of $A$ is $A^{-1} = \frac{1}{\det(A)} \text{adj}(A)$.
$A^{-1} = \frac{1}{25} \begin{pmatrix} 0 & -5 & 5 \\ 15 & 0 & -10 \\ -10 & 10 & 5 \end{pmatrix}$
Now, we can find $X$ using $X = A^{-1}B$:
$X = \frac{1}{25} \begin{pmatrix} 0 & -5 & 5 \\ 15 & 0 & -10 \\ -10 & 10 & 5 \end{pmatrix} \begin{pmatrix} 60 \\ 45 \\ 70 \end{pmatrix}$
Perform the matrix multiplication:
$X = \frac{1}{25} \begin{pmatrix} (0 \times 60) + (-5 \times 45) + (5 \times 70) \\ (15 \times 60) + (0 \times 45) + (-10 \times 70) \\ (-10 \times 60) + (10 \times 45) + (5 \times 70) \end{pmatrix}$
$X = \frac{1}{25} \begin{pmatrix} 0 - 225 + 350 \\ 900 + 0 - 700 \\ -600 + 450 + 350 \end{pmatrix}$
$X = \frac{1}{25} \begin{pmatrix} 125 \\ 200 \\ 200 \end{pmatrix}$
$X = \begin{pmatrix} 125/25 \\ 200/25 \\ 200/25 \end{pmatrix} = \begin{pmatrix} 5 \\ 8 \\ 8 \end{pmatrix}$
So, $x = 5$, $y = 8$, and $z = 8$.
Therefore, the cost of each item per kg is:
Cost of onion per kg = $\textsf{₹}5$
Cost of wheat per kg = $\textsf{₹}8$
Cost of rice per kg = $\textsf{₹}8$
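As a cross-check on the arithmetic (not the matrix method demanded by the question), the original, unsimplified system can be solved numerically with NumPy:

```python
import numpy as np

A = np.array([[4, 3, 2], [2, 4, 6], [6, 2, 3]], dtype=float)   # original system, before simplifying row 2
B = np.array([60, 90, 70], dtype=float)
print(np.linalg.solve(A, B))   # [5. 8. 8.] -> onion 5, wheat 8, rice 8 (rupees per kg)
```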
Examples 30 to 34 - Miscellaneous Examples
Example 30: If a, b, c are positive and unequal, show that value of the determinant
$∆ = \begin{vmatrix} a&b&c\\b&c&a\\c&a&b \end{vmatrix}$ is negative.
Answer:
Given:
The determinant $∆ = \begin{vmatrix} a&b&c\\b&c&a\\c&a&b \end{vmatrix}$, where $a, b, c$ are positive and unequal.
To Show:
The value of the determinant $∆$ is negative.
Solution:
Let's calculate the determinant $∆$:
$∆ = a \begin{vmatrix} c&a\\a&b \end{vmatrix} - b \begin{vmatrix} b&a\\c&b \end{vmatrix} + c \begin{vmatrix} b&c\\c&a \end{vmatrix}$
$∆ = a(cb - a \times a) - b(b \times b - a \times c) + c(b \times a - c \times c)$
$∆ = a(bc - a^2) - b(b^2 - ac) + c(ab - c^2)$
$∆ = abc - a^3 - b^3 + abc + abc - c^3$
Combine like terms:
$∆ = 3abc - a^3 - b^3 - c^3$
Rearrange the terms and factor out $-1$:
$∆ = -(a^3 + b^3 + c^3 - 3abc)$
We use the algebraic identity: $a^3 + b^3 + c^3 - 3abc = (a+b+c)(a^2+b^2+c^2 - ab - bc - ca)$.
So, $∆ = -(a+b+c)(a^2+b^2+c^2 - ab - bc - ca)$
Now, consider the expression $a^2+b^2+c^2 - ab - bc - ca$. We can rewrite it as:
$a^2+b^2+c^2 - ab - bc - ca = \frac{1}{2}(2a^2+2b^2+2c^2 - 2ab - 2bc - 2ca)$
$= \frac{1}{2}[(a^2 - 2ab + b^2) + (b^2 - 2bc + c^2) + (c^2 - 2ca + a^2)]$
$= \frac{1}{2}[(a-b)^2 + (b-c)^2 + (c-a)^2]$
Substituting this back into the expression for $∆$:
$∆ = -(a+b+c) \frac{1}{2}[(a-b)^2 + (b-c)^2 + (c-a)^2]$
We are given that $a, b, c$ are positive. Therefore, $(a+b+c)$ is positive.
We are also given that $a, b, c$ are unequal, so at least one of the differences $a-b$, $b-c$, $c-a$ is non-zero. Each of $(a-b)^2$, $(b-c)^2$, and $(c-a)^2$ is non-negative, and at least one of them is strictly positive (the case where all three differences vanish, i.e., $a=b=c$, is excluded).
Therefore, $(a-b)^2 + (b-c)^2 + (c-a)^2 > 0$.
This implies that $\frac{1}{2}[(a-b)^2 + (b-c)^2 + (c-a)^2]$ is positive.
So, $∆ = -( \text{positive term} ) \times ( \text{positive term} )$
$∆ = -(\text{a positive value})$
Thus, the value of the determinant $∆$ is negative.
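A symbolic cross-check of the factorisation used in this argument, using SymPy (an optional verification, not part of the proof):

```python
import sympy as sp

a, b, c = sp.symbols('a b c', positive=True)
D = sp.Matrix([[a, b, c], [b, c, a], [c, a, b]]).det()
print(sp.expand(D))      # 3*a*b*c - a**3 - b**3 - c**3
print(sp.factor(D))      # -(a + b + c)*(a**2 + b**2 + c**2 - a*b - a*c - b*c), up to term ordering
```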
Example 31: If a, b, c are in A.P., find the value of
$\begin{vmatrix} 2y+4&5y+7&8y+a\\3y+5&6y+8&9y+b\\4y+6&7y+9&10y+c \end{vmatrix}$
Answer:
Given:
The determinant $∆ = \begin{vmatrix} 2y+4&5y+7&8y+a\\3y+5&6y+8&9y+b\\4y+6&7y+9&10y+c \end{vmatrix}$.
$a, b, c$ are in A.P.
To Find:
The value of the determinant.
Solution:
Let the given determinant be $D$.
$D = \begin{vmatrix} 2y+4&5y+7&8y+a\\3y+5&6y+8&9y+b\\4y+6&7y+9&10y+c \end{vmatrix}$
Since $a, b, c$ are in A.P., the difference between consecutive terms is constant. That is, $b-a = c-b$. Let this common difference be $d$. So, $b-a = d$ and $c-b = d$.
We will use elementary row operations to simplify the determinant without changing its value.
Apply the operation $R_2 \to R_2 - R_1$:
The elements of the new second row are:
$(3y+5) - (2y+4) = y+1$
$(6y+8) - (5y+7) = y+1$
$(9y+b) - (8y+a) = y + (b-a)$
The determinant becomes:
$D = \begin{vmatrix} 2y+4 & 5y+7 & 8y+a \\ y+1 & y+1 & y+(b-a) \\ 4y+6 & 7y+9 & 10y+c \end{vmatrix}$
Now, apply the operation $R_3 \to R_3 - R_2$ (using the elements of the determinant after the previous operation):
The elements of the new third row are:
$(4y+6) - (3y+5) = y+1$
$(7y+9) - (6y+8) = y+1$
$(10y+c) - (9y+b) = y + (c-b)$
The determinant becomes:
$D = \begin{vmatrix} 2y+4 & 5y+7 & 8y+a \\ y+1 & y+1 & y+(b-a) \\ y+1 & y+1 & y+(c-b) \end{vmatrix}$
Now, substitute the A.P. property $b-a = c-b$. Let $d = b-a = c-b$.
$D = \begin{vmatrix} 2y+4 & 5y+7 & 8y+a \\ y+1 & y+1 & y+d \\ y+1 & y+1 & y+d \end{vmatrix}$
Observe the second and third rows of the determinant:
Row 2: $(y+1, y+1, y+d)$
Row 3: $(y+1, y+1, y+d)$
Since the second and third rows of the determinant are identical ($R_2 = R_3$), the value of the determinant is 0.
Therefore, $D = 0$.
The value of the determinant is $\mathbf{0}$.
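A quick SymPy check (optional): it substitutes the A.P. condition $c = 2b - a$ and confirms that the determinant vanishes.

```python
import sympy as sp

y, a, b = sp.symbols('y a b')
c = 2*b - a                      # a, b, c in A.P.  <=>  b - a = c - b
D = sp.Matrix([
    [2*y + 4, 5*y + 7,  8*y + a],
    [3*y + 5, 6*y + 8,  9*y + b],
    [4*y + 6, 7*y + 9, 10*y + c],
]).det()
print(sp.simplify(D))            # 0
```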
Example 32: Show that
$∆ = \begin{vmatrix} (y+z)^2&xy&zx\\xy&(x+z)^2&yz\\xz&yz&(x+y)^2 \end{vmatrix} = 2xyz (x + y + z)^3$
Answer:
Given:
The determinant $∆ = \begin{vmatrix} (y+z)^2&xy&zx\\xy&(x+z)^2&yz\\xz&yz&(x+y)^2 \end{vmatrix}$.
To Show:
$∆ = 2xyz (x + y + z)^3$.
Solution:
Let the given determinant be $∆$.
$∆ = \begin{vmatrix} (y+z)^2&xy&zx\\xy&(x+z)^2&yz\\xz&yz&(x+y)^2 \end{vmatrix}$
Multiply $R_1$ by $x$, $R_2$ by $y$, $R_3$ by $z$. To keep the value of the determinant unchanged, we must divide by $xyz$.
$∆ = \frac{1}{xyz} \begin{vmatrix} x(y+z)^2&x^2y&zx^2\\xy^2&y(x+z)^2&y^2z\\xz^2&yz^2&z(x+y)^2 \end{vmatrix}$
Now, take out common factors $x$ from $C_1$, $y$ from $C_2$, and $z$ from $C_3$.
$∆ = \frac{xyz}{xyz} \begin{vmatrix} (y+z)^2&x^2&x^2\\y^2&(x+z)^2&y^2\\z^2&z^2&(x+y)^2 \end{vmatrix}$
$∆ = \begin{vmatrix} (y+z)^2&x^2&x^2\\y^2&(x+z)^2&y^2\\z^2&z^2&(x+y)^2 \end{vmatrix}$
Let this determinant be denoted by $D$. So, $∆ = D$. We will now evaluate $D$.
Finding factors of D:
Factor $x$: If we set $x=0$, the determinant becomes:
$D = \begin{vmatrix} (y+z)^2&0&0\\y^2&z^2&y^2\\z^2&z^2&y^2 \end{vmatrix} = (y+z)^2 (z^2y^2 - y^2z^2) = (y+z)^2 (0) = 0$.
Since $D=0$ when $x=0$, $x$ is a factor of $D$. By symmetry, $y$ and $z$ are also factors of $D$. Therefore, $xyz$ is a factor of $D$.
Factor $(x+y+z)$: If we set $x+y+z=0$, then $y+z=-x$, $x+z=-y$, $x+y=-z$.
$D = \begin{vmatrix} (-x)^2&x^2&x^2\\y^2&(-y)^2&y^2\\z^2&z^2&(-z)^2 \end{vmatrix} = \begin{vmatrix} x^2&x^2&x^2\\y^2&y^2&y^2\\z^2&z^2&z^2 \end{vmatrix}$
Since all three columns are identical, the value of the determinant is $0$. Thus, $(x+y+z)$ is a factor of $D$.
Factor $(x+y+z)^2$: Apply column operations $C_1 \to C_1 - C_3$ and $C_2 \to C_2 - C_3$ on $D$.
$D = \begin{vmatrix} (y+z)^2 - x^2 & x^2 - x^2 & x^2 \\ y^2 - y^2 & (x+z)^2 - y^2 & y^2 \\ z^2 - (x+y)^2 & z^2 - (x+y)^2 & (x+y)^2 \end{vmatrix}$
$D = \begin{vmatrix} (y+z-x)(y+z+x) & 0 & x^2 \\ 0 & (x+z-y)(x+z+y) & y^2 \\ (z-(x+y))(z+(x+y)) & (z-(x+y))(z+(x+y)) & (x+y)^2 \end{vmatrix}$
$D = \begin{vmatrix} (y+z-x)(x+y+z) & 0 & x^2 \\ 0 & (x+z-y)(x+y+z) & y^2 \\ -(x+y-z)(x+y+z) & -(x+y-z)(x+y+z) & (x+y)^2 \end{vmatrix}$
Take $(x+y+z)$ common from $C_1$ and $C_2$.
$D = (x+y+z)^2 \begin{vmatrix} y+z-x & 0 & x^2 \\ 0 & x+z-y & y^2 \\ -(x+y-z) & -(x+y-z) & (x+y)^2 \end{vmatrix}$
This shows that $(x+y+z)^2$ is a factor of $D$. Let $D'$ be the remaining determinant:
$D' = \begin{vmatrix} y+z-x & 0 & x^2 \\ 0 & x+z-y & y^2 \\ -(x+y-z) & -(x+y-z) & (x+y)^2 \end{vmatrix}$
Factor $(x+y+z)^3$: We need to check if $(x+y+z)$ is a factor of $D'$.
Set $x+y+z=0$. Then $y+z=-x$, $x+z=-y$, $x+y=-z$.
Also, $y+z-x = -x-x = -2x$.
$x+z-y = -y-y = -2y$.
$x+y-z = -z-z = -2z$.
Substitute these into $D'$:
$D' = \begin{vmatrix} -2x & 0 & x^2 \\ 0 & -2y & y^2 \\ -(-2z) & -(-2z) & (-z)^2 \end{vmatrix} = \begin{vmatrix} -2x & 0 & x^2 \\ 0 & -2y & y^2 \\ 2z & 2z & z^2 \end{vmatrix}$
Expand the determinant $D'$:
$D' = -2x \begin{vmatrix} -2y & y^2 \\ 2z & z^2 \end{vmatrix} - 0 + x^2 \begin{vmatrix} 0 & -2y \\ 2z & 2z \end{vmatrix}$
$D' = -2x ((-2y)(z^2) - (y^2)(2z)) + x^2 (0 - (-2y)(2z))$
$D' = -2x (-2yz^2 - 2y^2z) + x^2 (4yz)$
$D' = -2x (-2yz(z+y)) + 4x^2yz$
Since $x+y+z=0$, we have $z+y = -x$.
$D' = -2x (-2yz(-x)) + 4x^2yz$
$D' = -2x (2xyz) + 4x^2yz$
$D' = -4x^2yz + 4x^2yz = 0$.
Since $D'=0$ when $x+y+z=0$, $(x+y+z)$ is a factor of $D'$.
Therefore, $(x+y+z)^2 \times (x+y+z) = (x+y+z)^3$ is a factor of $D$.
Combining Factors:
We have shown that $x, y, z,$ and $(x+y+z)^3$ are factors of $D$. The degree of $D$ is 6 (e.g., the term $(y+z)^2(x+z)^2(x+y)^2$ has degree 6). The degree of the combined factor $xyz(x+y+z)^3$ is $1+1+1+3=6$.
Thus, $D$ must be a constant multiple of $xyz(x+y+z)^3$.
$D = k \cdot xyz (x+y+z)^3$ for some constant $k$.
Finding the constant k:
To find $k$, we substitute specific non-zero values for $x, y, z$. Let $x=1, y=1, z=1$.
$D = \begin{vmatrix} (1+1)^2&1^2&1^2\\1^2&(1+1)^2&1^2\\1^2&1^2&(1+1)^2 \end{vmatrix} = \begin{vmatrix} 4&1&1\\1&4&1\\1&1&4 \end{vmatrix}$
$D = 4(4 \times 4 - 1 \times 1) - 1(1 \times 4 - 1 \times 1) + 1(1 \times 1 - 4 \times 1)$
$D = 4(16 - 1) - 1(4 - 1) + 1(1 - 4)$
$D = 4(15) - 1(3) + 1(-3) = 60 - 3 - 3 = 54$.
Now substitute $x=1, y=1, z=1$ into the factored form:
$D = k \cdot (1)(1)(1) (1+1+1)^3 = k \cdot 1 \cdot (3)^3 = k \cdot 27$.
Equating the two values of $D$:
$27k = 54$
$k = \frac{54}{27} = 2$.
So, $D = 2xyz(x+y+z)^3$.
Conclusion:
Since $∆ = D$, we have shown that:
$∆ = 2xyz (x+y+z)^3$.
Hence Proved.
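The identity can also be confirmed mechanically; the SymPy sketch below expands the difference between the determinant and $2xyz(x+y+z)^3$ and checks that it is identically zero (verification only, independent of the factor-counting argument).

```python
import sympy as sp

x, y, z = sp.symbols('x y z')
D = sp.Matrix([
    [(y + z)**2, x*y,        z*x],
    [x*y,        (x + z)**2, y*z],
    [x*z,        y*z,        (x + y)**2],
]).det()
print(sp.expand(D - 2*x*y*z*(x + y + z)**3))   # 0 -> identity holds
```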
Example 33: Use product $\begin{bmatrix}1&-1&2\\0&2&-3\\3&-2&4 \end{bmatrix} \begin{bmatrix}-2&0&1\\9&2&-3\\6&1&-2 \end{bmatrix}$ to solve the system of equations
x – y + 2z = 1
2y – 3z = 1
3x – 2y + 4z = 2
Answer:
Given:
The matrix product: $P = \begin{bmatrix} 1 & -1 & 2 \\ 0 & 2 & -3 \\ 3 & -2 & 4 \end{bmatrix} \begin{bmatrix} -2 & 0 & 1 \\ 9 & 2 & -3 \\ 6 & 1 & -2 \end{bmatrix}$
The system of linear equations:
$x – y + 2z = 1$
$0x + 2y – 3z = 1$
$3x – 2y + 4z = 2$
To Find:
The solution (values of $x, y, z$) for the given system of equations using the given matrix product.
Solution:
First, let's calculate the given matrix product.
Let $A = \begin{bmatrix} 1 & -1 & 2 \\ 0 & 2 & -3 \\ 3 & -2 & 4 \end{bmatrix}$ and $B = \begin{bmatrix} -2 & 0 & 1 \\ 9 & 2 & -3 \\ 6 & 1 & -2 \end{bmatrix}$.
Product $AB = \begin{bmatrix} 1 & -1 & 2 \\ 0 & 2 & -3 \\ 3 & -2 & 4 \end{bmatrix} \begin{bmatrix} -2 & 0 & 1 \\ 9 & 2 & -3 \\ 6 & 1 & -2 \end{bmatrix}$
$AB = \begin{bmatrix} (1)(-2)+(-1)(9)+(2)(6) & (1)(0)+(-1)(2)+(2)(1) & (1)(1)+(-1)(-3)+(2)(-2) \\ (0)(-2)+(2)(9)+(-3)(6) & (0)(0)+(2)(2)+(-3)(1) & (0)(1)+(2)(-3)+(-3)(-2) \\ (3)(-2)+(-2)(9)+(4)(6) & (3)(0)+(-2)(2)+(4)(1) & (3)(1)+(-2)(-3)+(4)(-2) \end{bmatrix}$
$AB = \begin{bmatrix} -2-9+12 & 0-2+2 & 1+3-4 \\ 0+18-18 & 0+4-3 & 0-6+6 \\ -6-18+24 & 0-4+4 & 3+6-8 \end{bmatrix}$
$AB = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix} = I$
So, the product of the given matrices is the identity matrix $I$.
Now, let's represent the given system of equations in matrix form $CX = D$.
$x – y + 2z = 1$
$0x + 2y – 3z = 1$
$3x – 2y + 4z = 2$
The coefficient matrix is $C = \begin{bmatrix} 1 & -1 & 2 \\ 0 & 2 & -3 \\ 3 & -2 & 4 \end{bmatrix}$.
The variable matrix is $X = \begin{bmatrix} x \\ y \\ z \end{bmatrix}$.
The constant matrix is $D = \begin{bmatrix} 1 \\ 1 \\ 2 \end{bmatrix}$.
The system is $CX = D$.
We observe that the coefficient matrix $C$ is the same as the first matrix $A$ in the given product.
$C = A = \begin{bmatrix} 1 & -1 & 2 \\ 0 & 2 & -3 \\ 3 & -2 & 4 \end{bmatrix}$
From the product calculation, we found that $AB = I$. Since $C=A$, we have $CB = I$.
By the definition of the inverse of a matrix, if $CB = I$, then $B$ is the inverse of $C$.
So, $C^{-1} = B = \begin{bmatrix} -2 & 0 & 1 \\ 9 & 2 & -3 \\ 6 & 1 & -2 \end{bmatrix}$.
To solve the system $CX = D$, we multiply both sides by $C^{-1}$ on the left:
$C^{-1}(CX) = C^{-1}D$
$(C^{-1}C)X = C^{-1}D$
$IX = C^{-1}D$
$X = C^{-1}D$
Now, substitute the matrices for $C^{-1}$ and $D$:
$X = \begin{bmatrix} -2 & 0 & 1 \\ 9 & 2 & -3 \\ 6 & 1 & -2 \end{bmatrix} \begin{bmatrix} 1 \\ 1 \\ 2 \end{bmatrix}$
Perform the matrix multiplication:
$X = \begin{bmatrix} (-2)(1)+(0)(1)+(1)(2) \\ (9)(1)+(2)(1)+(-3)(2) \\ (6)(1)+(1)(1)+(-2)(2) \end{bmatrix}$
$X = \begin{bmatrix} -2+0+2 \\ 9+2-6 \\ 6+1-4 \end{bmatrix}$
$X = \begin{bmatrix} 0 \\ 5 \\ 3 \end{bmatrix}$
Since $X = \begin{bmatrix} x \\ y \\ z \end{bmatrix}$, we have:
$x = 0, y = 5, z = 3$
Therefore, the solution to the given system of equations is $x=0$, $y=5$, and $z=3$.
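A short NumPy sketch confirming both facts used above: the given product is the identity, and multiplying $D$ by the second factor produces the solution (verification only).

```python
import numpy as np

C = np.array([[1, -1, 2], [0, 2, -3], [3, -2, 4]], dtype=float)
C_inv = np.array([[-2, 0, 1], [9, 2, -3], [6, 1, -2]], dtype=float)
D = np.array([1, 1, 2], dtype=float)

print(C @ C_inv)       # 3x3 identity matrix -> the second factor is C^{-1}
print(C_inv @ D)       # [0. 5. 3.]  ->  x = 0, y = 5, z = 3
```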
Example 34: Prove that
$∆ = \begin{vmatrix} a+bx&c+dx&p+qx\\ax+b&cx+d&px+q\\u&v&w \end{vmatrix}$ = $(1 - x^2) \begin{vmatrix} a&c&p\\b&d&q\\u&v&w \end{vmatrix}$
Answer:
Given:
The determinant $∆ = \begin{vmatrix} a+bx&c+dx&p+qx\\ax+b&cx+d&px+q\\u&v&w \end{vmatrix}$.
To Prove:
$∆ = (1 - x^2) \begin{vmatrix} a&c&p\\b&d&q\\u&v&w \end{vmatrix}$.
Proof:
We start with the given determinant:
$∆ = \begin{vmatrix} a+bx&c+dx&p+qx\\ax+b&cx+d&px+q\\u&v&w \end{vmatrix}$
Apply the row operation $R_1 \to R_1 - xR_2$.
The elements of the new $R_1$ will be:
$(a+bx) - x(ax+b) = a+bx-ax^2-bx = a - ax^2 = a(1-x^2)$
$(c+dx) - x(cx+d) = c+dx-cx^2-dx = c - cx^2 = c(1-x^2)$
$(p+qx) - x(px+q) = p+qx-px^2-qx = p - px^2 = p(1-x^2)$
So the determinant becomes:
$∆ = \begin{vmatrix} a(1-x^2)&c(1-x^2)&p(1-x^2)\\ax+b&cx+d&px+q\\u&v&w \end{vmatrix}$
Take the common factor $(1-x^2)$ out from the first row ($R_1$).
$∆ = (1 - x^2) \begin{vmatrix} a&c&p\\ax+b&cx+d&px+q\\u&v&w \end{vmatrix}$
Now, apply the row operation $R_2 \to R_2 - xR_1$ on the determinant inside the bracket.
The elements of the new $R_2$ will be:
$(ax+b) - x(a) = ax+b-ax = b$
$(cx+d) - x(c) = cx+d-cx = d$
$(px+q) - x(p) = px+q-px = q$
So the determinant inside the bracket becomes:
$\begin{vmatrix} a&c&p\\b&d&q\\u&v&w \end{vmatrix}$
Substituting this back, we get:
$∆ = (1 - x^2) \begin{vmatrix} a&c&p\\b&d&q\\u&v&w \end{vmatrix}$
Hence Proved.
Miscellaneous Exercises on Chapter 4
Question 1. Prove that the determinant $\begin{vmatrix} x&\sinθ&\cosθ\\−\sinθ&−x&1\\\cosθ&1&x \end{vmatrix}$ is independent of θ.
Answer:
Let the given determinant be denoted by $D$.
Given: The determinant $D = \begin{vmatrix} x&\sinθ&\cosθ\\−\sinθ&−x&1\\\cosθ&1&x \end{vmatrix}$.
To Prove: The determinant $D$ is independent of $\theta$.
Solution:
We will expand the determinant along the first row ($R_1$).
$D = x \begin{vmatrix} -x & 1 \\ 1 & x \end{vmatrix} - \sin\theta \begin{vmatrix} -\sin\theta & 1 \\ \cos\theta & x \end{vmatrix} + \cos\theta \begin{vmatrix} -\sin\theta & -x \\ \cos\theta & 1 \end{vmatrix}$
Now, we evaluate the $2 \times 2$ determinants:
$D = x((-x)(x) - (1)(1)) - \sin\theta((-\sin\theta)(x) - (1)(\cos\theta)) + \cos\theta((-\sin\theta)(1) - (-x)(\cos\theta))$
Simplify the expression:
$D = x(-x^2 - 1) - \sin\theta(-x\sin\theta - \cos\theta) + \cos\theta(-\sin\theta + x\cos\theta)$
Distribute the terms:
$D = -x^3 - x + (-\sin\theta)(-x\sin\theta) + (-\sin\theta)(-\cos\theta) + (\cos\theta)(-\sin\theta) + (\cos\theta)(x\cos\theta)$
$D = -x^3 - x + x\sin^2\theta + \sin\theta\cos\theta - \sin\theta\cos\theta + x\cos^2\theta$
Cancel the $\sin\theta\cos\theta$ and $-\sin\theta\cos\theta$ terms:
$D = -x^3 - x + x\sin^2\theta + x\cos^2\theta$
Factor out $x$ from the terms involving $\theta$:
$D = -x^3 - x + x(\sin^2\theta + \cos^2\theta)$
Using the fundamental trigonometric identity $\sin^2\theta + \cos^2\theta = 1$:
$D = -x^3 - x + x(1)$
$D = -x^3 - x + x$
Simplify further:
$D = -x^3$
The value of the determinant is $-x^3$, which does not contain the variable $\theta$.
Therefore, the determinant is independent of $\theta$.
Hence, Proved.
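A symbolic check with SymPy (optional) that the determinant indeed reduces to $-x^3$ for arbitrary $\theta$:

```python
import sympy as sp

x, t = sp.symbols('x theta')
D = sp.Matrix([
    [x,          sp.sin(t), sp.cos(t)],
    [-sp.sin(t), -x,        1],
    [sp.cos(t),  1,         x],
]).det()
print(sp.simplify(D))   # -x**3, independent of theta
```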
Question 2. Without expanding the determinant, prove that $\begin{vmatrix} a&a^2&bc\\b&b^2&ca\\c&c^2&ab \end{vmatrix} = \begin{vmatrix} 1&a^2&a^3\\1&b^2&b^3\\1&c^2&c^3 \end{vmatrix}$ .
Answer:
Given:
Two determinants: $D_1 = \begin{vmatrix} a&a^2&bc\\b&b^2&ca\\c&c^2&ab \end{vmatrix}$ and $D_2 = \begin{vmatrix} 1&a^2&a^3\\1&b^2&b^3\\1&c^2&c^3 \end{vmatrix}$.
To Prove:
$D_1 = D_2$ without expanding the determinants.
Proof:
Consider the Left Hand Side determinant $D_1 = \begin{vmatrix} a&a^2&bc\\b&b^2&ca\\c&c^2&ab \end{vmatrix}$.
Multiply $R_1$ by $a$, $R_2$ by $b$, and $R_3$ by $c$. When a row of a determinant is multiplied by a scalar, the value of the determinant is multiplied by the same scalar. Thus, the new determinant is $abc$ times the original determinant $D_1$.
$abc \cdot D_1 = \begin{vmatrix} a \cdot a & a \cdot a^2 & a \cdot bc \\ b \cdot b & b \cdot b^2 & b \cdot ca \\ c \cdot c & c \cdot c^2 & c \cdot ab \end{vmatrix} = \begin{vmatrix} a^2 & a^3 & abc \\ b^2 & b^3 & abc \\ c^2 & c^3 & abc \end{vmatrix}$.
Now, observe the third column ($C_3$) of the resulting determinant $\begin{vmatrix} a^2 & a^3 & abc \\ b^2 & b^3 & abc \\ c^2 & c^3 & abc \end{vmatrix}$. The element $abc$ is common to all entries in this column.
We can take $abc$ common from $C_3$. When a common factor is taken out from a column (or row), the determinant is divided by that factor.
So, $\begin{vmatrix} a^2 & a^3 & abc \\ b^2 & b^3 & abc \\ c^2 & c^3 & abc \end{vmatrix} = abc \begin{vmatrix} a^2 & a^3 & 1 \\ b^2 & b^3 & 1 \\ c^2 & c^3 & 1 \end{vmatrix}$.
Combining the steps, we have $abc \cdot D_1 = abc \cdot \begin{vmatrix} a^2 & a^3 & 1 \\ b^2 & b^3 & 1 \\ c^2 & c^3 & 1 \end{vmatrix}$.
If $abc \neq 0$, we can divide both sides by $abc$, giving $D_1 = \begin{vmatrix} a^2 & a^3 & 1 \\ b^2 & b^3 & 1 \\ c^2 & c^3 & 1 \end{vmatrix}$. If $abc = 0$, the equality holds as shown by direct expansion (if $a=0$, both determinants equal $b^2c^2(c-b)$ etc.).
Thus, $D_1 = \begin{vmatrix} a^2 & a^3 & 1 \\ b^2 & b^3 & 1 \\ c^2 & c^3 & 1 \end{vmatrix}$.
Let's call the resulting determinant $D' = \begin{vmatrix} a^2 & a^3 & 1 \\ b^2 & b^3 & 1 \\ c^2 & c^3 & 1 \end{vmatrix}$.
We need to transform $D'$ into $D_2 = \begin{vmatrix} 1&a^2&a^3\\1&b^2&b^3\\1&c^2&c^3 \end{vmatrix}$ using column operations.
The columns in $D'$ are $C_1 = \begin{pmatrix} a^2 \\ b^2 \\ c^2 \end{pmatrix}$, $C_2 = \begin{pmatrix} a^3 \\ b^3 \\ c^3 \end{pmatrix}$, $C_3 = \begin{pmatrix} 1 \\ 1 \\ 1 \end{pmatrix}$.
The columns in $D_2$ are $C'_1 = \begin{pmatrix} 1 \\ 1 \\ 1 \end{pmatrix}$, $C'_2 = \begin{pmatrix} a^2 \\ b^2 \\ c^2 \end{pmatrix}$, $C'_3 = \begin{pmatrix} a^3 \\ b^3 \\ c^3 \end{pmatrix}$.
We need to rearrange the columns of $D'$ from the order $(C_1, C_2, C_3)$ to $(C_3, C_1, C_2)$.
Perform the column swap $C_1 \leftrightarrow C_3$ on $D'$. Swapping two columns of a determinant multiplies its value by $-1$.
$D' = - \begin{vmatrix} 1 & a^3 & a^2 \\ 1 & b^3 & b^2 \\ 1 & c^3 & c^2 \end{vmatrix}$.
Now, perform the column swap $C_2 \leftrightarrow C_3$ on the new determinant $\begin{vmatrix} 1 & a^3 & a^2 \\ 1 & b^3 & b^2 \\ 1 & c^3 & c^2 \end{vmatrix}$. This again multiplies the value by $-1$.
$- \begin{vmatrix} 1 & a^3 & a^2 \\ 1 & b^3 & b^2 \\ 1 & c^3 & c^2 \end{vmatrix} = - (-1) \begin{vmatrix} 1 & a^2 & a^3 \\ 1 & b^2 & b^3 \\ 1 & c^2 & c^3 \end{vmatrix} = \begin{vmatrix} 1 & a^2 & a^3 \\ 1 & b^2 & b^3 \\ 1 & c^2 & c^3 \end{vmatrix}$.
The resulting determinant is $\begin{vmatrix} 1&a^2&a^3\\1&b^2&b^3\\1&c^2&c^3 \end{vmatrix}$, which is exactly the Right Hand Side determinant $D_2$.
Thus, we have shown that $D_1 = D' = D_2$.
Therefore, $\begin{vmatrix} a&a^2&bc\\b&b^2&ca\\c&c^2&ab \end{vmatrix} = \begin{vmatrix} 1&a^2&a^3\\1&b^2&b^3\\1&c^2&c^3 \end{vmatrix}$.
Hence, Proved.
Question 3. Evaluate $\begin{vmatrix} \cosα \cosβ &\cosα \sinβ &−\sinα\\−\sinβ &\cosβ &0\\\sinα \cosβ &\sinα \sinβ &\cosα \end{vmatrix}$
Answer:
Given:
The determinant to be evaluated is:
∆ = $\begin{vmatrix} \cos\alpha \cos\beta & \cos\alpha \sin\beta & −\sin\alpha\\−\sin\beta & \cos\beta & 0\\\sin\alpha \cos\beta & \sin\alpha \sin\beta & \cos\alpha \end{vmatrix}$
To Evaluate:
The value of the determinant ∆.
Solution:
We can evaluate the determinant by expanding it along any row or column. Expanding along the second row ($R_2$) is convenient because it contains a zero element, which simplifies the calculation.
The expansion of the determinant along the second row is given by:
∆ = $(-\sin\beta) \cdot C_{21} + (\cos\beta) \cdot C_{22} + (0) \cdot C_{23}$
Where $C_{ij}$ is the cofactor of the element in the $i^{th}$ row and $j^{th}$ column.
First, we find the cofactor $C_{21}$:
$C_{21} = (-1)^{2+1} \begin{vmatrix} \cos\alpha \sin\beta & -\sin\alpha \\ \sin\alpha \sin\beta & \cos\alpha \end{vmatrix}$
$C_{21} = -1 [(\cos\alpha \sin\beta)(\cos\alpha) - (-\sin\alpha)(\sin\alpha \sin\beta)]$
$C_{21} = -[\cos^2\alpha \sin\beta + \sin^2\alpha \sin\beta]$
$C_{21} = -\sin\beta(\cos^2\alpha + \sin^2\alpha)$
Using the trigonometric identity $\cos^2\alpha + \sin^2\alpha = 1$, we get:
$C_{21} = -\sin\beta$
Next, we find the cofactor $C_{22}$:
$C_{22} = (-1)^{2+2} \begin{vmatrix} \cos\alpha \cos\beta & -\sin\alpha \\ \sin\alpha \cos\beta & \cos\alpha \end{vmatrix}$
$C_{22} = 1 [(\cos\alpha \cos\beta)(\cos\alpha) - (-\sin\alpha)(\sin\alpha \cos\beta)]$
$C_{22} = [\cos^2\alpha \cos\beta + \sin^2\alpha \cos\beta]$
$C_{22} = \cos\beta(\cos^2\alpha + \sin^2\alpha)$
Using the identity $\cos^2\alpha + \sin^2\alpha = 1$, we get:
$C_{22} = \cos\beta$
The third term in the expansion is zero, so we don't need to calculate $C_{23}$.
Now, substitute the values back into the expansion formula:
∆ = $(-\sin\beta) \cdot (-\sin\beta) + (\cos\beta) \cdot (\cos\beta) + 0$
∆ = $\sin^2\beta + \cos^2\beta$
Using the fundamental trigonometric identity $\sin^2\beta + \cos^2\beta = 1$:
∆ = $1$
Thus, the value of the given determinant is 1.
Question 4. If a, b and c are real numbers, and
$∆ = \begin{vmatrix} b+c&c+a&a+b\\c+a&a+b&b+c\\a+b&b+c&c+a \end{vmatrix} = 0$ ,
Show that either a + b + c = 0 or a = b = c.
Answer:
Given:
The determinant $\Delta = \begin{vmatrix} b+c&c+a&a+b\\c+a&a+b&b+c\\a+b&b+c&c+a \end{vmatrix} = 0$, where $a, b, c$ are real numbers.
To Show:
Either $a+b+c=0$ or $a=b=c$.
Solution:
Consider the given determinant:
$\Delta = \begin{vmatrix} b+c&c+a&a+b\\c+a&a+b&b+c\\a+b&b+c&c+a \end{vmatrix}$
Apply the row operation $R_1 \to R_1 + R_2 + R_3$:
$\Delta = \begin{vmatrix} (b+c)+(c+a)+(a+b)&(c+a)+(a+b)+(b+c)&(a+b)+(b+c)+(c+a)\\c+a&a+b&b+c\\a+b&b+c&c+a \end{vmatrix}$
$\Delta = \begin{vmatrix} 2(a+b+c)&2(a+b+c)&2(a+b+c)\\c+a&a+b&b+c\\a+b&b+c&c+a \end{vmatrix}$
Take out the common factor $2(a+b+c)$ from $R_1$:
$\Delta = 2(a+b+c) \begin{vmatrix} 1&1&1\\c+a&a+b&b+c\\a+b&b+c&c+a \end{vmatrix}$
Apply the column operations $C_2 \to C_2 - C_1$ and $C_3 \to C_3 - C_1$:
$\Delta = 2(a+b+c) \begin{vmatrix} 1&1-1&1-1\\c+a&(a+b)-(c+a)&(b+c)-(c+a)\\a+b&(b+c)-(a+b)&(c+a)-(a+b) \end{vmatrix}$
$\Delta = 2(a+b+c) \begin{vmatrix} 1&0&0\\c+a&b-c&b-a\\a+b&c-a&c-b \end{vmatrix}$
Expand the determinant along the first row ($R_1$):
$\Delta = 2(a+b+c) \left[ 1 \cdot \begin{vmatrix} b-c&b-a\\c-a&c-b \end{vmatrix} - 0 + 0 \right]$
$\Delta = 2(a+b+c) [ (b-c)(c-b) - (b-a)(c-a) ]$
$\Delta = 2(a+b+c) [ -(b-c)^2 - (bc - ab - ac + a^2) ]$
$\Delta = 2(a+b+c) [ -(b^2 - 2bc + c^2) - bc + ab + ac - a^2 ]$
$\Delta = 2(a+b+c) [ -b^2 + 2bc - c^2 - bc + ab + ac - a^2 ]$
$\Delta = 2(a+b+c) [ -a^2 - b^2 - c^2 + ab + bc + ac ]$
$\Delta = -2(a+b+c) [ a^2 + b^2 + c^2 - ab - bc - ac ]$
We are given that $\Delta = 0$. So,
$-2(a+b+c) [ a^2 + b^2 + c^2 - ab - bc - ac ] = 0$
Since $-2 \neq 0$, this equation holds if and only if either $(a+b+c) = 0$ or $(a^2 + b^2 + c^2 - ab - bc - ac) = 0$.
Case 1: $a+b+c = 0$.
This is one of the conditions we need to show.
Case 2: $a^2 + b^2 + c^2 - ab - bc - ac = 0$.
Multiply the equation by 2:
$2(a^2 + b^2 + c^2 - ab - bc - ac) = 0$
$2a^2 + 2b^2 + 2c^2 - 2ab - 2bc - 2ac = 0$
Rearrange the terms to form perfect squares:
$(a^2 - 2ab + b^2) + (b^2 - 2bc + c^2) + (c^2 - 2ac + a^2) = 0$
$(a-b)^2 + (b-c)^2 + (c-a)^2 = 0$
Since $a, b, c$ are real numbers, the squares of the differences $(a-b), (b-c), (c-a)$ are non-negative.
$(a-b)^2 \ge 0$, $(b-c)^2 \ge 0$, and $(c-a)^2 \ge 0$.
The sum of non-negative terms can be zero only if each individual term is zero.
So, $(a-b)^2 = 0$, $(b-c)^2 = 0$, and $(c-a)^2 = 0$.
This implies $a-b = 0$, $b-c = 0$, and $c-a = 0$.
From these equations, we get $a=b$, $b=c$, and $c=a$. Therefore, $a=b=c$.
Thus, if $\Delta = 0$, it implies that either $a+b+c = 0$ or $a=b=c$.
Hence, Shown.
Question 5. Solve the equation $\begin{vmatrix} x+a&x&x\\x&x+a&x\\x&x&x+a \end{vmatrix} = 0, \;a ≠ 0$
Answer:
Let the given determinant be $D$.
$D = \begin{vmatrix} x+a&x&x\\x&x+a&x\\x&x&x+a \end{vmatrix}$
We are given that $D = 0$ and $a \neq 0$.
Solution:
Apply the operation $R_1 \to R_1 + R_2 + R_3$ to the determinant:
$D = \begin{vmatrix} (x+a)+x+x&x+(x+a)+x&x+x+(x+a)\\x&x+a&x\\x&x&x+a \end{vmatrix}$
$D = \begin{vmatrix} 3x+a&3x+a&3x+a\\x&x+a&x\\x&x&x+a \end{vmatrix}$
Take out the common factor $(3x+a)$ from $R_1$:
$D = (3x+a) \begin{vmatrix} 1&1&1\\x&x+a&x\\x&x&x+a \end{vmatrix}$
Apply the operations $C_2 \to C_2 - C_1$ and $C_3 \to C_3 - C_1$:
$D = (3x+a) \begin{vmatrix} 1&1-1&1-1\\x&(x+a)-x&x-x\\x&x-x&(x+a)-x \end{vmatrix}$
$D = (3x+a) \begin{vmatrix} 1&0&0\\x&a&0\\x&0&a \end{vmatrix}$
Expand the determinant along the first row ($R_1$):
$D = (3x+a) \left[ 1 \cdot \begin{vmatrix} a&0\\0&a \end{vmatrix} - 0 \cdot \begin{vmatrix} x&0\\x&a \end{vmatrix} + 0 \cdot \begin{vmatrix} x&a\\x&0 \end{vmatrix} \right]$
$D = (3x+a) [ (a)(a) - (0)(0) ]$
$D = (3x+a)(a^2)$
We are given that $D = 0$.
So, $(3x+a)(a^2) = 0$.
Since $a \neq 0$, we have $a^2 \neq 0$.
For the product $(3x+a)(a^2)$ to be zero, the factor $(3x+a)$ must be zero.
$3x+a = 0$
$3x = -a$
$x = -\frac{a}{3}$
The solution to the equation is $\mathbf{x = -\frac{a}{3}}$.
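SymPy can be used to confirm the factorisation $(3x+a)a^2$ and the root $x = -\frac{a}{3}$ (a verification sketch only):

```python
import sympy as sp

x, a = sp.symbols('x a')
D = sp.Matrix([
    [x + a, x,     x],
    [x,     x + a, x],
    [x,     x,     x + a],
]).det()
print(sp.factor(D))               # a**2*(a + 3*x)
print(sp.solve(sp.Eq(D, 0), x))   # [-a/3]  (since a != 0)
```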
Question 6. Prove that $\begin{vmatrix} a^2&bc&ac+c^2\\a^2+ab&b^2&ac\\ab&b^2+bc&c^2 \end{vmatrix} = 4a^2b^2c^2$
Answer:
To Prove:
$\begin{vmatrix} a^2&bc&ac+c^2\\a^2+ab&b^2&ac\\ab&b^2+bc&c^2 \end{vmatrix} = 4a^2b^2c^2$
Proof:
Consider the Left Hand Side (LHS) of the equation:
$LHS = \begin{vmatrix} a^2&bc&ac+c^2\\a^2+ab&b^2&ac\\ab&b^2+bc&c^2 \end{vmatrix}$
Take out the common factor $a$ from $C_1$, $b$ from $C_2$, and $c$ from $C_3$.
$LHS = abc \begin{vmatrix} a&c&a+c\\a+b&b&a\\b&b+c&c \end{vmatrix}$
Apply the column operation $C_1 \to C_1 + C_2 + C_3$:
$LHS = abc \begin{vmatrix} a+c+(a+c)&c&a+c\\(a+b)+b+a&b&a\\b+(b+c)+c&b+c&c \end{vmatrix}$
$LHS = abc \begin{vmatrix} 2a+2c&c&a+c\\2a+2b&b&a\\2b+2c&b+c&c \end{vmatrix}$
Take out the common factor $2$ from $C_1$:
$LHS = 2abc \begin{vmatrix} a+c&c&a+c\\a+b&b&a\\b+c&b+c&c \end{vmatrix}$
Apply the column operation $C_1 \to C_1 - C_3$:
$LHS = 2abc \begin{vmatrix} (a+c)-(a+c)&c&a+c\\(a+b)-a&b&a\\(b+c)-c&b+c&c \end{vmatrix}$
$LHS = 2abc \begin{vmatrix} 0&c&a+c\\b&b&a\\b&b+c&c \end{vmatrix}$
Expand the determinant along the first column ($C_1$). Note that the first element is 0.
$LHS = 2abc \left[ 0 \cdot \begin{vmatrix} b&a\\b+c&c \end{vmatrix} - b \cdot \begin{vmatrix} c&a+c\\b+c&c \end{vmatrix} + b \cdot \begin{vmatrix} c&a+c\\b&a \end{vmatrix} \right]$
$LHS = 2abc \left[ -b \left( c(c) - (a+c)(b+c) \right) + b \left( c(a) - (a+c)(b) \right) \right]$
$LHS = 2abc \left[ -b \left( c^2 - (ab + ac + bc + c^2) \right) + b \left( ac - (ab + bc) \right) \right]$
$LHS = 2abc \left[ -b \left( c^2 - ab - ac - bc - c^2 \right) + b \left( ac - ab - bc \right) \right]$
$LHS = 2abc \left[ -b \left( - ab - ac - bc \right) + b \left( ac - ab - bc \right) \right]$
Distribute $-b$ in the first term and $b$ in the second term inside the square brackets:
$LHS = 2abc \left[ (ab^2 + abc + b^2c) + (abc - ab^2 - b^2c) \right]$
Combine like terms inside the square brackets:
$LHS = 2abc \left[ ab^2 - ab^2 + abc + abc + b^2c - b^2c \right]$
$LHS = 2abc \left[ 2abc \right]$
$LHS = 4a^2b^2c^2$
This is equal to the Right Hand Side (RHS).
Therefore, $\begin{vmatrix} a^2&bc&ac+c^2\\a^2+ab&b^2&ac\\ab&b^2+bc&c^2 \end{vmatrix} = 4a^2b^2c^2$.
Hence, Proved.
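A quick symbolic verification of the identity is possible as well (a short SymPy sketch, assuming SymPy is available; it simply confirms the algebra above rather than replacing the property-based proof):

```python
import sympy as sp

a, b, c = sp.symbols('a b c')
lhs = sp.Matrix([[a**2, b*c, a*c + c**2],
                 [a**2 + a*b, b**2, a*c],
                 [a*b, b**2 + b*c, c**2]]).det()
# Difference between LHS and RHS expands to exactly zero
print(sp.expand(lhs - 4*a**2*b**2*c**2))   # 0
```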
Question 7. If $A^{-1} = \begin{bmatrix} 3&−1&1\\−15&6&−5\\5&−2&2 \end{bmatrix}$ and $B = \begin{bmatrix} 1&2&−2\\−1&3&0\\0&−2&1 \end{bmatrix}$, find $(AB)^{-1}$.
Answer:
Given:
$A^{-1} = \begin{pmatrix} 3 & -1 & 1 \\ -15 & 6 & -5 \\ 5 & -2 & 2 \end{pmatrix}$
$B = \begin{pmatrix} 1 & 2 & -2 \\ -1 & 3 & 0 \\ 0 & -2 & 1 \end{pmatrix}$
To Find:
$(AB)^{-1}$
Solution:
We know that for two invertible matrices A and B, the inverse of their product is given by the formula: $(AB)^{-1} = B^{-1}A^{-1}$.
We are given $A^{-1}$, so we need to find $B^{-1}$.
To find $B^{-1}$, we use the formula $B^{-1} = \frac{1}{|B|} \text{adj}(B)$.
First, calculate the determinant of B, $|B|$. We will expand along the first row ($R_1$):
$|B| = 1 \begin{vmatrix} 3 & 0 \\ -2 & 1 \end{vmatrix} - 2 \begin{vmatrix} -1 & 0 \\ 0 & 1 \end{vmatrix} + (-2) \begin{vmatrix} -1 & 3 \\ 0 & -2 \end{vmatrix}$
$|B| = 1((3)(1) - (0)(-2)) - 2((-1)(1) - (0)(0)) - 2((-1)(-2) - (3)(0))$
$|B| = 1(3 - 0) - 2(-1 - 0) - 2(2 - 0)$
$|B| = 3 - 2(-1) - 2(2)$
$|B| = 3 + 2 - 4$
$|B| = 1$
Since $|B| = 1 \neq 0$, $B$ is invertible, and $B^{-1}$ exists.
Next, find the adjoint of B, adj(B). First, find the cofactor matrix of B. Let $C_{ij}$ be the cofactor of the element in the $i$-th row and $j$-th column of B.
$C_{11} = + \begin{vmatrix} 3 & 0 \\ -2 & 1 \end{vmatrix} = (3)(1) - (0)(-2) = 3$
$C_{12} = - \begin{vmatrix} -1 & 0 \\ 0 & 1 \end{vmatrix} = -((-1)(1) - (0)(0)) = -(-1) = 1$
$C_{13} = + \begin{vmatrix} -1 & 3 \\ 0 & -2 \end{vmatrix} = (-1)(-2) - (3)(0) = 2 - 0 = 2$
$C_{21} = - \begin{vmatrix} 2 & -2 \\ -2 & 1 \end{vmatrix} = -((2)(1) - (-2)(-2)) = -(2 - 4) = -(-2) = 2$
$C_{22} = + \begin{vmatrix} 1 & -2 \\ 0 & 1 \end{vmatrix} = (1)(1) - (-2)(0) = 1 - 0 = 1$
$C_{23} = - \begin{vmatrix} 1 & 2 \\ 0 & -2 \end{vmatrix} = -((1)(-2) - (2)(0)) = -(-2 - 0) = -(-2) = 2$
$C_{31} = + \begin{vmatrix} 2 & -2 \\ 3 & 0 \end{vmatrix} = (2)(0) - (-2)(3) = 0 - (-6) = 6$
$C_{32} = - \begin{vmatrix} 1 & -2 \\ -1 & 0 \end{vmatrix} = -((1)(0) - (-2)(-1)) = -(0 - 2) = -(-2) = 2$
$C_{33} = + \begin{vmatrix} 1 & 2 \\ -1 & 3 \end{vmatrix} = (1)(3) - (2)(-1) = 3 - (-2) = 3 + 2 = 5$
The cofactor matrix of B is $\begin{pmatrix} 3 & 1 & 2 \\ 2 & 1 & 2 \\ 6 & 2 & 5 \end{pmatrix}$.
The adjoint of B is the transpose of the cofactor matrix:
$\text{adj}(B) = \begin{pmatrix} 3 & 1 & 2 \\ 2 & 1 & 2 \\ 6 & 2 & 5 \end{pmatrix}^T = \begin{pmatrix} 3 & 2 & 6 \\ 1 & 1 & 2 \\ 2 & 2 & 5 \end{pmatrix}$
Now, calculate $B^{-1}$:
$B^{-1} = \frac{1}{|B|} \text{adj}(B) = \frac{1}{1} \begin{pmatrix} 3 & 2 & 6 \\ 1 & 1 & 2 \\ 2 & 2 & 5 \end{pmatrix} = \begin{pmatrix} 3 & 2 & 6 \\ 1 & 1 & 2 \\ 2 & 2 & 5 \end{pmatrix}$
Finally, calculate $(AB)^{-1} = B^{-1}A^{-1}$:
$(AB)^{-1} = \begin{pmatrix} 3 & 2 & 6 \\ 1 & 1 & 2 \\ 2 & 2 & 5 \end{pmatrix} \begin{pmatrix} 3 & -1 & 1 \\ -15 & 6 & -5 \\ 5 & -2 & 2 \end{pmatrix}$
Perform matrix multiplication:
$(AB)^{-1} = \begin{pmatrix} (3)(3)+(2)(-15)+(6)(5) & (3)(-1)+(2)(6)+(6)(-2) & (3)(1)+(2)(-5)+(6)(2) \\ (1)(3)+(1)(-15)+(2)(5) & (1)(-1)+(1)(6)+(2)(-2) & (1)(1)+(1)(-5)+(2)(2) \\ (2)(3)+(2)(-15)+(5)(5) & (2)(-1)+(2)(6)+(5)(-2) & (2)(1)+(2)(-5)+(5)(2) \end{pmatrix}$
$(AB)^{-1} = \begin{pmatrix} 9-30+30 & -3+12-12 & 3-10+12 \\ 3-15+10 & -1+6-4 & 1-5+4 \\ 6-30+25 & -2+12-10 & 2-10+10 \end{pmatrix}$
$(AB)^{-1} = \begin{pmatrix} 9 & -3 & 5 \\ -2 & 1 & 0 \\ 1 & 0 & 2 \end{pmatrix}$
The value of $(AB)^{-1}$ is $\begin{pmatrix} 9 & -3 & 5 \\ -2 & 1 & 0 \\ 1 & 0 & 2 \end{pmatrix}$.
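The same result can be confirmed numerically (an illustrative NumPy sketch, not part of the textbook method; the `result` array below is simply the answer obtained above):

```python
import numpy as np

A_inv = np.array([[3, -1, 1], [-15, 6, -5], [5, -2, 2]], dtype=float)
B = np.array([[1, 2, -2], [-1, 3, 0], [0, -2, 1]], dtype=float)
result = np.array([[9, -3, 5], [-2, 1, 0], [1, 0, 2]], dtype=float)   # answer found above

# (AB)^{-1} computed two ways: directly from A and B, and as B^{-1} A^{-1}
A = np.linalg.inv(A_inv)
print(np.allclose(np.linalg.inv(A @ B), result))       # True
print(np.allclose(np.linalg.inv(B) @ A_inv, result))   # True
```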
Question 8. Let A = $\begin{bmatrix} 1&2&1\\2&3&1\\1&1&5 \end{bmatrix}$. Verify that
(i) $(\text{adj } A)^{-1} = \text{adj}(A^{-1})$
(ii) $(A^{-1})^{-1} = A$
Answer:
Given:
The matrix $A = \begin{bmatrix} 1&2&1\\2&3&1\\1&1&5 \end{bmatrix}$.
Initial Calculations (Required for both parts):
1. Determinant of A (|A|):
$|A| = 1(3 \cdot 5 - 1 \cdot 1) - 2(2 \cdot 5 - 1 \cdot 1) + 1(2 \cdot 1 - 3 \cdot 1)$
$|A| = 1(15 - 1) - 2(10 - 1) + 1(2 - 3)$
$|A| = 14 - 2(9) - 1 = 14 - 18 - 1 = -5$.
Since $|A| \neq 0$, A is invertible.
2. Adjoint of A (adj A):
We find the cofactors:
$C_{11} = +(15 - 1) = 14$, $C_{12} = -(10 - 1) = -9$, $C_{13} = +(2 - 3) = -1$
$C_{21} = -(10 - 1) = -9$, $C_{22} = +(5 - 1) = 4$, $C_{23} = -(1 - 2) = 1$
$C_{31} = +(2 - 3) = -1$, $C_{32} = -(1 - 2) = 1$, $C_{33} = +(3 - 4) = -1$
Cofactor Matrix = $\begin{bmatrix} 14 & -9 & -1 \\ -9 & 4 & 1 \\ -1 & 1 & -1 \end{bmatrix}$
adj A = (Cofactor Matrix)$^T$ = $\begin{bmatrix} 14 & -9 & -1 \\ -9 & 4 & 1 \\ -1 & 1 & -1 \end{bmatrix}$
3. Inverse of A ($A^{-1}$):
$A^{-1} = \frac{1}{|A|} \text{adj A} = \frac{1}{-5} \begin{bmatrix} 14 & -9 & -1 \\ -9 & 4 & 1 \\ -1 & 1 & -1 \end{bmatrix} = \begin{bmatrix} -14/5 & 9/5 & 1/5 \\ 9/5 & -4/5 & -1/5 \\ 1/5 & -1/5 & 1/5 \end{bmatrix}$
Part (i): Verify $(\text{adj } A)^{-1} = \text{adj}(A^{-1})$
LHS: $(\text{adj } A)^{-1}$
Let B = adj A = $\begin{bmatrix} 14 & -9 & -1 \\ -9 & 4 & 1 \\ -1 & 1 & -1 \end{bmatrix}$. We need to find $B^{-1}$.
First, find |B| = |adj A|. We know the property $| \text{adj A} | = |A|^{n-1}$. Here n=3.
$|\text{adj A}| = (-5)^{3-1} = (-5)^2 = 25$.
Next, find adj(B) = adj(adj A). We know adj(adj A) = $|A|^{n-2} A$.
adj(adj A) = $(-5)^{3-2} A = -5A = -5 \begin{bmatrix} 1&2&1\\2&3&1\\1&1&5 \end{bmatrix} = \begin{bmatrix} -5 & -10 & -5 \\ -10 & -15 & -5 \\ -5 & -5 & -25 \end{bmatrix}$.
$(\text{adj } A)^{-1} = \frac{1}{|\text{adj } A|} \text{adj}(\text{adj } A) = \frac{1}{25} \begin{bmatrix} -5 & -10 & -5 \\ -10 & -15 & -5 \\ -5 & -5 & -25 \end{bmatrix} = \begin{bmatrix} -1/5 & -2/5 & -1/5 \\ -2/5 & -3/5 & -1/5 \\ -1/5 & -1/5 & -1 \end{bmatrix}$.
RHS: $\text{adj}(A^{-1})$
Let $C = A^{-1} = \frac{1}{-5} \begin{bmatrix} 14 & -9 & -1 \\ -9 & 4 & 1 \\ -1 & 1 & -1 \end{bmatrix}$.
We need to find adj(C). We know the property $\text{adj}(kA) = k^{n-1}\,\text{adj}(A)$.
$\text{adj}(A^{-1}) = \text{adj}\left(\frac{1}{-5}\,\text{adj } A\right) = \left(\frac{1}{-5}\right)^{3-1} \text{adj}(\text{adj } A) = \frac{1}{25}\,\text{adj}(\text{adj } A)$.
We already found adj(adj A) = $\begin{bmatrix} -5 & -10 & -5 \\ -10 & -15 & -5 \\ -5 & -5 & -25 \end{bmatrix}$.
So, $\text{adj}(A^{-1}) = \frac{1}{25} \begin{bmatrix} -5 & -10 & -5 \\ -10 & -15 & -5 \\ -5 & -5 & -25 \end{bmatrix} = \begin{bmatrix} -1/5 & -2/5 & -1/5 \\ -2/5 & -3/5 & -1/5 \\ -1/5 & -1/5 & -1 \end{bmatrix}$.
Since LHS = RHS, the property $(\text{adj } A)^{-1} = \text{adj}(A^{-1})$ is verified.
Part (ii): Verify $(A^{-1})^{-1} = A$
Let $B = A^{-1} = \frac{1}{-5} \begin{bmatrix} 14 & -9 & -1 \\ -9 & 4 & 1 \\ -1 & 1 & -1 \end{bmatrix}$. We need to find $B^{-1}$.
We use the formula $B^{-1} = \frac{1}{|B|} \text{adj B}$.
$|B| = |A^{-1}| = \frac{1}{|A|} = \frac{1}{-5}$.
adj B = $\text{adj}(A^{-1})$. From Part (i), we found:
$\text{adj}(A^{-1}) = \frac{1}{25}\,\text{adj}(\text{adj } A) = \frac{1}{25}(-5A) = -\frac{1}{5}A$.
Now, substitute these into the formula for $B^{-1}$:
$B^{-1} = (A^{-1})^{-1} = \frac{1}{1/(-5)} \left(-\frac{1}{5}A\right)$
$(A^{-1})^{-1} = -5 \left(-\frac{1}{5}A\right) = (-5 \cdot -\frac{1}{5}) A = 1 \cdot A = A$.
Since the result is A, the property $(A^{-1})^{-1} = A$ is verified.
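Both parts can also be checked with exact rational arithmetic (an illustrative SymPy sketch; note that SymPy calls the adjoint `adjugate()`):

```python
import sympy as sp

A = sp.Matrix([[1, 2, 1], [2, 3, 1], [1, 1, 5]])
adjA = A.adjugate()                       # adj A

print(adjA.inv() - A.inv().adjugate())    # zero matrix -> part (i) holds
print(A.inv().inv() - A)                  # zero matrix -> part (ii) holds
```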
Question 9. Evaluate $\begin{vmatrix} x&y&x+y\\y&x+y&x\\x+y&x&y \end{vmatrix}$
Answer:
Let the given determinant be $D$.
$D = \begin{vmatrix} x&y&x+y\\y&x+y&x\\x+y&x&y \end{vmatrix}$
Solution:
Apply the row operation $R_1 \to R_1 + R_2 + R_3$:
$D = \begin{vmatrix} x+y+(x+y)&y+(x+y)+x&(x+y)+x+y\\y&x+y&x\\x+y&x&y \end{vmatrix}$
$D = \begin{vmatrix} 2x+2y&2x+2y&2x+2y\\y&x+y&x\\x+y&x&y \end{vmatrix}$
Take out the common factor $(2x+2y)$ from $R_1$.
$D = (2x+2y) \begin{vmatrix} 1&1&1\\y&x+y&x\\x+y&x&y \end{vmatrix}$
$D = 2(x+y) \begin{vmatrix} 1&1&1\\y&x+y&x\\x+y&x&y \end{vmatrix}$
Apply the column operations $C_2 \to C_2 - C_1$ and $C_3 \to C_3 - C_1$:
$D = 2(x+y) \begin{vmatrix} 1&1-1&1-1\\y&(x+y)-y&x-y\\x+y&x-(x+y)&y-(x+y) \end{vmatrix}$
$D = 2(x+y) \begin{vmatrix} 1&0&0\\y&x&x-y\\x+y&-y&-x \end{vmatrix}$
Expand the determinant along the first row ($R_1$):
$D = 2(x+y) \left[ 1 \cdot \begin{vmatrix} x&x-y\\-y&-x \end{vmatrix} - 0 + 0 \right]$
$D = 2(x+y) [ (x)(-x) - (x-y)(-y) ]$
$D = 2(x+y) [ -x^2 - (-xy + y^2) ]$
$D = 2(x+y) [ -x^2 + xy - y^2 ]$
Rearrange the terms inside the bracket:
$D = 2(x+y) [ -(x^2 - xy + y^2) ]$
$D = -2(x+y)(x^2 - xy + y^2)$
Recall the sum of cubes formula: $a^3 + b^3 = (a+b)(a^2 - ab + b^2)$.
The expression inside the bracket $x^2 - xy + y^2$ is part of this formula.
So, $(x+y)(x^2 - xy + y^2) = x^3 + y^3$.
$D = -2(x^3 + y^3)$
The value of the determinant is $\mathbf{-2(x^3 + y^3)}$.
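For readers who want a quick symbolic confirmation (a minimal SymPy sketch, assuming SymPy is available), expanding and factoring the determinant reproduces the same value:

```python
import sympy as sp

x, y = sp.symbols('x y')
D = sp.Matrix([[x, y, x + y],
               [y, x + y, x],
               [x + y, x, y]]).det()
print(sp.expand(D))   # -2*x**3 - 2*y**3
print(sp.factor(D))   # -2*(x + y)*(x**2 - x*y + y**2)
```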
Question 10. Evaluate $\begin{vmatrix} 1&x&y\\1&x+y&y\\1&x&x+y \end{vmatrix}$
Answer:
Let the given determinant be $D$.
$D = \begin{vmatrix} 1&x&y\\1&x+y&y\\1&x&x+y \end{vmatrix}$
Solution:
We can simplify the determinant by applying row operations to create zeros in the first column.
Apply the operation $R_2 \to R_2 - R_1$:
$D = \begin{vmatrix} 1&x&y\\1-1&(x+y)-x&y-y\\1&x&x+y \end{vmatrix}$
$D = \begin{vmatrix} 1&x&y\\0&y&0\\1&x&x+y \end{vmatrix}$
Apply the operation $R_3 \to R_3 - R_1$:
$D = \begin{vmatrix} 1&x&y\\0&y&0\\1-1&x-x&(x+y)-y \end{vmatrix}$
$D = \begin{vmatrix} 1&x&y\\0&y&0\\0&0&x \end{vmatrix}$
Now, the determinant is in upper triangular form. The value of an upper triangular determinant is the product of its diagonal elements.
Alternatively, we can expand along the first column ($C_1$) since it has two zeros:
$D = 1 \cdot \begin{vmatrix} y&0\\0&x \end{vmatrix} - 0 \cdot \begin{vmatrix} x&y\\0&x \end{vmatrix} + 0 \cdot \begin{vmatrix} x&y\\y&0 \end{vmatrix}$
$D = 1 \cdot ((y)(x) - (0)(0))$
$D = yx - 0$
$D = xy$
The value of the determinant is $\mathbf{xy}$.
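The same value falls out of a one-line symbolic check (an optional SymPy sketch, shown only to corroborate the row reduction above):

```python
import sympy as sp

x, y = sp.symbols('x y')
D = sp.Matrix([[1, x, y],
               [1, x + y, y],
               [1, x, x + y]]).det()
print(sp.expand(D))   # x*y
```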
Using properties of determinants in Exercises 11 to 15, prove that:
Question 11. $\begin{vmatrix} α&α^2&β+γ\\β&β^2&γ+α\\γ&γ^2&α+β \end{vmatrix}$ = (β – γ) (γ – α) (α – β) (α + β + γ)
Answer:
To Prove:
$\begin{vmatrix} α&α^2&β+γ\\β&β^2&γ+α\\γ&γ^2&α+β \end{vmatrix} = (\beta – \gamma) (\gamma – \alpha) (\alpha – \beta) (\alpha + \beta + \gamma)$
Proof:
Consider the Left Hand Side (LHS) determinant:
$LHS = \begin{vmatrix} α&α^2&β+γ\\β&β^2&γ+α\\γ&γ^2&α+β \end{vmatrix}$
Apply the column operation $C_3 \to C_3 + C_1$:
$LHS = \begin{vmatrix} α&α^2&β+γ+α\\β&β^2&γ+α+β\\γ&γ^2&α+β+γ \end{vmatrix}$
$LHS = \begin{vmatrix} α&α^2&α+β+γ\\β&β^2&α+β+γ\\γ&γ^2&α+β+γ \end{vmatrix}$
Take out the common factor $(\alpha+\beta+\gamma)$ from $C_3$:
$LHS = (\alpha+\beta+\gamma) \begin{vmatrix} α&α^2&1\\β&β^2&1\\γ&γ^2&1 \end{vmatrix}$
Apply the row operation $R_2 \to R_2 - R_1$ and $R_3 \to R_3 - R_1$:
$LHS = (\alpha+\beta+\gamma) \begin{vmatrix} α&α^2&1\\β-α&β^2-α^2&1-1\\γ-α&γ^2-α^2&1-1 \end{vmatrix}$
$LHS = (\alpha+\beta+\gamma) \begin{vmatrix} α&α^2&1\\β-α&(β-α)(β+α)&0\\γ-α&(γ-α)(γ+α)&0 \end{vmatrix}$
Expand the determinant along the third column ($C_3$), as it has two zeros:
$LHS = (\alpha+\beta+\gamma) \left[ 1 \cdot \begin{vmatrix} β-α&(β-α)(β+α)\\γ-α&(γ-α)(γ+α) \end{vmatrix} - 0 + 0 \right]$
$LHS = (\alpha+\beta+\gamma) \begin{vmatrix} β-α&(β-α)(β+α)\\γ-α&(γ-α)(γ+α) \end{vmatrix}$
Take out the common factor $(\beta-\alpha)$ from $R_1$ and $(\gamma-\alpha)$ from $R_2$ of the $2 \times 2$ determinant:
$LHS = (\alpha+\beta+\gamma) (\beta-\alpha) (\gamma-\alpha) \begin{vmatrix} 1&β+α\\1&γ+α \end{vmatrix}$
Evaluate the $2 \times 2$ determinant:
$\begin{vmatrix} 1&β+α\\1&γ+α \end{vmatrix} = 1(γ+α) - 1(β+α) = γ+α - β-α = γ - β$
Substitute this value back into the expression for LHS:
$LHS = (\alpha+\beta+\gamma) (\beta-\alpha) (\gamma-\alpha) (γ - β)$
We need to rearrange the terms to match the RHS $(\beta – \gamma) (γ – α) (α – β) (α + β + γ)$.
Notice that $(\beta-\alpha) = -(\alpha-\beta)$ and $(\gamma-\beta) = -(\beta-\gamma)$.
$LHS = (\alpha+\beta+\gamma) (-(\alpha-\beta)) (\gamma-\alpha) (-(\beta-\gamma))$
$LHS = (-1)(-1) (\alpha+\beta+\gamma) (\alpha-\beta) (\gamma-\alpha) (\beta-\gamma)$
$LHS = 1 \cdot (\alpha+\beta+\gamma) (\alpha-\beta) (\gamma-\alpha) (\beta-\gamma)$
$LHS = (\beta-\gamma) (\gamma-\alpha) (\alpha-\beta) (\alpha+\beta+\gamma)$
This is equal to the Right Hand Side (RHS).
Therefore, $\begin{vmatrix} α&α^2&β+γ\\β&β^2&γ+α\\γ&γ^2&α+β \end{vmatrix} = (\beta – γ) (γ – α) (α – β) (α + β + γ)$.
Hence, Proved.
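Since both sides are polynomials in $\alpha, \beta, \gamma$, the identity can also be confirmed by expanding their difference (an illustrative SymPy sketch; variable names are arbitrary):

```python
import sympy as sp

al, be, ga = sp.symbols('alpha beta gamma')
lhs = sp.Matrix([[al, al**2, be + ga],
                 [be, be**2, ga + al],
                 [ga, ga**2, al + be]]).det()
rhs = (be - ga)*(ga - al)*(al - be)*(al + be + ga)
print(sp.expand(lhs - rhs))   # 0
```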
Question 12. $\begin{vmatrix} x&x^2&1+px^3\\y&y^2&1+py^3\\z&z^2&1+pz^3 \end{vmatrix}$ = (1 + pxyz) (x – y) (y – z) (z – x), where p is any scalar.
Answer:
Let the given determinant be $D$. We are asked to prove that $D = (1 + pxyz) (x – y) (y – z) (z – x)$.
$D = \begin{vmatrix} x&x^2&1+px^3\\y&y^2&1+py^3\\z&z^2&1+pz^3 \end{vmatrix}$
We use the property that if the elements of any column (or row) of a determinant are expressed as the sum of two or more terms, then the determinant can be written as the sum of two or more determinants. Splitting the determinant along the third column ($C_3$):
$D = \begin{vmatrix} x&x^2&1\\y&y^2&1\\z&z^2&1 \end{vmatrix} + \begin{vmatrix} x&x^2&px^3\\y&y^2&py^3\\z&z^2&pz^3 \end{vmatrix}$
Let $D_1 = \begin{vmatrix} x&x^2&1\\y&y^2&1\\z&z^2&1 \end{vmatrix}$ and $D_2 = \begin{vmatrix} x&x^2&px^3\\y&y^2&py^3\\z&z^2&pz^3 \end{vmatrix}$.
Consider $D_1$:
$D_1 = \begin{vmatrix} x&x^2&1\\y&y^2&1\\z&z^2&1 \end{vmatrix}$
This is a form of the Vandermonde determinant. By swapping columns $C_1 \leftrightarrow C_3$ and then $C_2 \leftrightarrow C_3$, we get the standard form, noting that each swap introduces a factor of $-1$:
$D_1 = (-1) \begin{vmatrix} 1&x^2&x\\1&y^2&y\\1&z^2&z \end{vmatrix} = (-1)(-1) \begin{vmatrix} 1&x&x^2\\1&y&y^2\\1&z&z^2 \end{vmatrix} = \begin{vmatrix} 1&x&x^2\\1&y&y^2\\1&z&z^2 \end{vmatrix}$
The value of the standard Vandermonde determinant $\begin{vmatrix} 1&a&a^2\\1&b&b^2\\1&c&c^2 \end{vmatrix}$ is $(b-a)(c-a)(c-b)$.
Applying this formula to $D_1$:
$D_1 = (y-x)(z-x)(z-y)$
We can rewrite this in terms of $(x-y)$, $(y-z)$, and $(z-x)$:
$D_1 = (-(x-y)) (z-x) (-(y-z)) = (x-y)(y-z)(z-x)$
Now consider $D_2$:
$D_2 = \begin{vmatrix} x&x^2&px^3\\y&y^2&py^3\\z&z^2&pz^3 \end{vmatrix}$
Factor out $p$ from the third column ($C_3$):
$D_2 = p \begin{vmatrix} x&x^2&x^3\\y&y^2&y^3\\z&z^2&z^3 \end{vmatrix}$
Now, factor out $x$ from the first row ($R_1$), $y$ from the second row ($R_2$), and $z$ from the third row ($R_3$):
$D_2 = pxyz \begin{vmatrix} 1&x&x^2\\1&y&y^2\\1&z&z^2 \end{vmatrix}$
The determinant $\begin{vmatrix} 1&x&x^2\\1&y&y^2\\1&z&z^2 \end{vmatrix}$ is the standard Vandermonde determinant, which equals $(y-x)(z-x)(z-y)$.
So, $D_2 = pxyz (y-x)(z-x)(z-y)$.
Rewriting the factors:
$D_2 = pxyz (-(x-y)) (z-x) (-(y-z)) = pxyz (x-y)(y-z)(z-x)$
The original determinant $D$ is the sum of $D_1$ and $D_2$:
$D = D_1 + D_2$
$D = (x-y)(y-z)(z-x) + pxyz (x-y)(y-z)(z-x)$
Factor out the common term $(x-y)(y-z)(z-x)$:
$D = \left(1 + pxyz\right) (x-y)(y-z)(z-x)$
This is the required expression.
Hence, it is proved that $\begin{vmatrix} x&x^2&1+px^3\\y&y^2&1+py^3\\z&z^2&1+pz^3 \end{vmatrix}$ = (1 + pxyz) (x – y) (y – z) (z – x).
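Because the claim is again a polynomial identity, it admits a direct symbolic check (a minimal SymPy sketch, offered only as corroboration of the Vandermonde argument above):

```python
import sympy as sp

x, y, z, p = sp.symbols('x y z p')
lhs = sp.Matrix([[x, x**2, 1 + p*x**3],
                 [y, y**2, 1 + p*y**3],
                 [z, z**2, 1 + p*z**3]]).det()
rhs = (1 + p*x*y*z)*(x - y)*(y - z)*(z - x)
print(sp.expand(lhs - rhs))   # 0
```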
Question 13. $\begin{vmatrix} 3a&−a+b&−a+c\\−b+a&3b&−b+c\\−c+a&−c+b&3c \end{vmatrix} = 3(a + b + c)(ab + bc + ca)$
Answer:
Given:
The determinant $\begin{vmatrix} 3a&−a+b&−a+c\\−b+a&3b&−b+c\\−c+a&−c+b&3c \end{vmatrix}$
To Prove:
$\begin{vmatrix} 3a&−a+b&−a+c\\−b+a&3b&−b+c\\−c+a&−c+b&3c \end{vmatrix} = 3(a + b + c) (ab + bc + ca)$
Proof:
Let the given determinant be $D$.
$D = \begin{vmatrix} 3a&−a+b&−a+c\\−b+a&3b&−b+c\\−c+a&−c+b&3c \end{vmatrix}$
Apply the column operation $C_1 \to C_1 + C_2 + C_3$:
The elements of the new first column are:
First element: $3a + (-a+b) + (-a+c) = 3a - a + b - a + c = a+b+c$
Second element: $(-b+a) + 3b + (-b+c) = -b + a + 3b - b + c = a+b+c$
Third element: $(-c+a) + (-c+b) + 3c = -c + a - c + b + 3c = a+b+c$
So, the determinant becomes:
$D = \begin{vmatrix} a+b+c&−a+b&−a+c\\a+b+c&3b&−b+c\\a+b+c&−c+b&3c \end{vmatrix}$
Factor out $(a+b+c)$ from the first column ($C_1$):
$D = (a+b+c) \begin{vmatrix} 1&−a+b&−a+c\\1&3b&−b+c\\1&−c+b&3c \end{vmatrix}$
Apply the row operations $R_2 \to R_2 - R_1$ and $R_3 \to R_3 - R_1$ to create zeros in the first column:
For $R_2 \to R_2 - R_1$:
First element: $1 - 1 = 0$
Second element: $3b - (-a+b) = 3b + a - b = a+2b$
Third element: $(-b+c) - (-a+c) = -b+c+a-c = a-b$
For $R_3 \to R_3 - R_1$:
First element: $1 - 1 = 0$
Second element: $(-c+b) - (-a+b) = -c+b+a-b = a-c$
Third element: $3c - (-a+c) = 3c+a-c = a+2c$
The determinant is now:
$D = (a+b+c) \begin{vmatrix} 1&−a+b&−a+c\\0&a+2b&a-b\\0&a-c&a+2c \end{vmatrix}$
Expand the determinant along the first column ($C_1$):
$D = (a+b+c) \times 1 \times \begin{vmatrix} a+2b&a-b\\a-c&a+2c \end{vmatrix} - 0 + 0$
Evaluate the $2 \times 2$ determinant:
$\begin{vmatrix} a+2b&a-b\\a-c&a+2c \end{vmatrix} = (a+2b)(a+2c) - (a-b)(a-c)$
$= (a^2 + 2ac + 2ab + 4bc) - (a^2 - ac - ab + bc)$
$= a^2 + 2ac + 2ab + 4bc - a^2 + ac + ab - bc$
$= (a^2 - a^2) + (2ac + ac) + (2ab + ab) + (4bc - bc)$
$= 0 + 3ac + 3ab + 3bc$
$= 3ab + 3bc + 3ca$
$= 3(ab + bc + ca)$
Substitute this result back into the expression for $D$:
$D = (a+b+c) \times 3(ab + bc + ca)$
$D = 3(a+b+c)(ab + bc + ca)$
This matches the right-hand side of the given equation.
Hence, it is proved that $\begin{vmatrix} 3a&−a+b&−a+c\\−b+a&3b&−b+c\\−c+a&−c+b&3c \end{vmatrix} = 3(a + b + c)(ab + bc + ca)$.
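As with the previous identities, a short symbolic expansion confirms the result (an illustrative SymPy sketch, not part of the textbook proof):

```python
import sympy as sp

a, b, c = sp.symbols('a b c')
lhs = sp.Matrix([[3*a, -a + b, -a + c],
                 [-b + a, 3*b, -b + c],
                 [-c + a, -c + b, 3*c]]).det()
rhs = 3*(a + b + c)*(a*b + b*c + c*a)
print(sp.expand(lhs - rhs))   # 0
```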
Question 14. $\begin{vmatrix} 1 &1+p&1+p+q\\2&3+2p&4+3p+2q\\3&6+3p&10+6p+3q \end{vmatrix} = 1$
Answer:
Given:
The determinant $\begin{vmatrix} 1 &1+p&1+p+q\\2&3+2p&4+3p+2q\\3&6+3p&10+6p+3q \end{vmatrix}$
To Prove:
$\begin{vmatrix} 1 &1+p&1+p+q\\2&3+2p&4+3p+2q\\3&6+3p&10+6p+3q \end{vmatrix} = 1$
Proof:
Let the given determinant be $D$.
$D = \begin{vmatrix} 1 &1+p&1+p+q\\2&3+2p&4+3p+2q\\3&6+3p&10+6p+3q \end{vmatrix}$
Apply the row operation $R_2 \to R_2 - 2R_1$. The new elements in the second row will be:
$2 - 2(1) = 0$
$(3+2p) - 2(1+p) = 3+2p - 2 - 2p = 1$
$(4+3p+2q) - 2(1+p+q) = 4+3p+2q - 2 - 2p - 2q = 2+p$
So the determinant becomes:
$D = \begin{vmatrix} 1 &1+p&1+p+q\\0&1&2+p\\3&6+3p&10+6p+3q \end{vmatrix}$
Now, apply the row operation $R_3 \to R_3 - 3R_1$. The new elements in the third row will be:
$3 - 3(1) = 0$
$(6+3p) - 3(1+p) = 6+3p - 3 - 3p = 3$
$(10+6p+3q) - 3(1+p+q) = 10+6p+3q - 3 - 3p - 3q = 7+3p$
The determinant is now:
$D = \begin{vmatrix} 1 &1+p&1+p+q\\0&1&2+p\\0&3&7+3p \end{vmatrix}$
Expand the determinant along the first column ($C_1$), as it contains two zeros. The expansion is $1 \times (\text{minor of element at } R_1, C_1) - 0 \times (\text{minor}) + 0 \times (\text{minor})$.
$D = 1 \times \begin{vmatrix} 1&2+p\\3&7+3p \end{vmatrix}$
Now, evaluate the $2 \times 2$ determinant:
$\begin{vmatrix} 1&2+p\\3&7+3p \end{vmatrix} = (1)(7+3p) - (3)(2+p)$
$= (7+3p) - (6+3p)$
$= 7+3p - 6 - 3p$
$= (7-6) + (3p-3p)$
$= 1 + 0$
$= 1$
So, the value of the determinant $D$ is $1$.
$D = 1$
This matches the right-hand side of the equation we were asked to prove.
Hence, it is proved that $\begin{vmatrix} 1 &1+p&1+p+q\\2&3+2p&4+3p+2q\\3&6+3p&10+6p+3q \end{vmatrix} = 1$.
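The determinant is independent of $p$ and $q$, which a symbolic expansion makes explicit (a minimal SymPy sketch, assuming SymPy is available):

```python
import sympy as sp

p, q = sp.symbols('p q')
D = sp.Matrix([[1, 1 + p, 1 + p + q],
               [2, 3 + 2*p, 4 + 3*p + 2*q],
               [3, 6 + 3*p, 10 + 6*p + 3*q]]).det()
print(sp.expand(D))   # 1, for every p and q
```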
Question 15. $\begin{vmatrix} \sinα&\cosα&\cos(α+δ)\\\sinβ&\cosβ&\cos(β+δ)\\\sinγ&\cosγ&\cos(γ+δ) \end{vmatrix} = 0$
Answer:
Given:
The determinant $\begin{vmatrix} \sinα&\cosα&\cos(α+δ)\\\sinβ&\cosβ&\cos(β+δ)\\\sinγ&\cosγ&\cos(γ+δ) \end{vmatrix}$
To Prove:
$\begin{vmatrix} \sinα&\cosα&\cos(α+δ)\\\sinβ&\cosβ&\cos(β+δ)\\\sinγ&\cosγ&\cos(γ+δ) \end{vmatrix} = 0$
Proof:
Let the given determinant be $D$.
$D = \begin{vmatrix} \sinα&\cosα&\cos(α+δ)\\\sinβ&\cosβ&\cos(β+δ)\\\sinγ&\cosγ&\cos(γ+δ) \end{vmatrix}$
Using the trigonometric identity $\cos(A+B) = \cos A \cos B - \sin A \sin B$, we expand the elements in the third column ($C_3$).
The elements are:
$\cos(α+δ) = \cosα \cosδ - \sinα \sinδ$
$\cos(β+δ) = \cosβ \cosδ - \sinβ \sinδ$
$\cos(γ+δ) = \cosγ \cosδ - \sinγ \sinδ$
So the determinant becomes:
$D = \begin{vmatrix} \sinα&\cosα&\cosα \cosδ - \sinα \sinδ\\\sinβ&\cosβ&\cosβ \cosδ - \sinβ \sinδ\\\sinγ&\cosγ&\cosγ \cosδ - \sinγ \sinδ \end{vmatrix}$
We use the property that if the elements of any column (or row) of a determinant are expressed as the sum of two or more terms, then the determinant can be written as the sum of two or more determinants. Splitting the determinant along the third column ($C_3$):
$D = \begin{vmatrix} \sinα&\cosα&\cosα \cosδ\\\sinβ&\cosβ&\cosβ \cosδ\\\sinγ&\cosγ&\cosγ \cosδ \end{vmatrix} + \begin{vmatrix} \sinα&\cosα&- \sinα \sinδ\\\sinβ&\cosβ&- \sinβ \sinδ\\\sinγ&\cosγ&- \sinγ \sinδ \end{vmatrix}$
Note the minus sign in the second determinant which comes from the expansion of $\cos(A+B)$.
Let's consider the first determinant: $\begin{vmatrix} \sinα&\cosα&\cosα \cosδ\\\sinβ&\cosβ&\cosβ \cosδ\\\sinγ&\cosγ&\cosγ \cosδ \end{vmatrix}$.
Factor out $\cosδ$ from the third column ($C_3$):
$\cosδ \begin{vmatrix} \sinα&\cosα&\cosα\\\sinβ&\cosβ&\cosβ\\\sinγ&\cosγ&\cosγ \end{vmatrix}$
In this determinant, the second column ($C_2$) and the third column ($C_3$) are identical. If any two columns (or rows) of a determinant are identical, the value of the determinant is zero.
So, $\begin{vmatrix} \sinα&\cosα&\cosα\\\sinβ&\cosβ&\cosβ\\\sinγ&\cosγ&\cosγ \end{vmatrix} = 0$.
Thus, the first part of the sum is $\cosδ \times 0 = 0$.
Now, let's consider the second determinant: $\begin{vmatrix} \sinα&\cosα&- \sinα \sinδ\\\sinβ&\cosβ&- \sinβ \sinδ\\\sinγ&\cosγ&- \sinγ \sinδ \end{vmatrix}$.
Factor out $- \sinδ$ from the third column ($C_3$):
$- \sinδ \begin{vmatrix} \sinα&\cosα&\sinα\\\sinβ&\cosβ&\sinβ\\\sinγ&\cosγ&\sinγ \end{vmatrix}$
In this determinant, the first column ($C_1$) and the third column ($C_3$) are identical. Therefore, its value is zero.
So, $\begin{vmatrix} \sinα&\cosα&\sinα\\\sinβ&\cosβ&\sinβ\\\sinγ&\cosγ&\sinγ \end{vmatrix} = 0$.
Thus, the second part of the sum is $- \sinδ \times 0 = 0$.
Adding the values of the two determinants:
$D = 0 + 0 = 0$
Thus, we have shown that the value of the determinant is $0$.
Hence, it is proved that $\begin{vmatrix} \sinα&\cosα&\cos(α+δ)\\\sinβ&\cosβ&\cos(β+δ)\\\sinγ&\cosγ&\cos(γ+δ) \end{vmatrix} = 0$.
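After expanding the compound angles, every term cancels algebraically, so a symbolic expansion also returns zero (an optional SymPy sketch, shown only as a cross-check of the column-splitting argument):

```python
import sympy as sp

a, b, g, d = sp.symbols('alpha beta gamma delta')
D = sp.Matrix([[sp.sin(a), sp.cos(a), sp.cos(a + d)],
               [sp.sin(b), sp.cos(b), sp.cos(b + d)],
               [sp.sin(g), sp.cos(g), sp.cos(g + d)]]).det()
# Expand cos(x + delta) with the compound-angle formula, then expand the polynomial
print(sp.expand(sp.expand_trig(D)))   # 0
```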
Question 16. Solve the system of equations
$\frac{2}{x}$ + $\frac{3}{y}$ + $\frac{10}{z}$ = 4
$\frac{4}{x}$ - $\frac{6}{y}$ + $\frac{5}{z}$ = 1
$\frac{6}{x}$ + $\frac{9}{y}$ - $\frac{20}{z}$ = 2
Answer:
Given:
The system of equations:
$\frac{2}{x}$ + $\frac{3}{y}$ + $\frac{10}{z}$ = 4
$\frac{4}{x}$ - $\frac{6}{y}$ + $\frac{5}{z}$ = 1
$\frac{6}{x}$ + $\frac{9}{y}$ - $\frac{20}{z}$ = 2
To Find:
The values of $x$, $y$, and $z$ that satisfy the given system of equations.
Solution:
The given equations involve the reciprocals of $x$, $y$, and $z$. Let's make a substitution to convert this into a linear system.
Let $u = \frac{1}{x}$, $v = \frac{1}{y}$, and $w = \frac{1}{z}$.
Substituting these into the given equations, we get a linear system in terms of $u$, $v$, and $w$:
$2u + 3v + 10w = 4$
$4u - 6v + 5w = 1$
$6u + 9v - 20w = 2$
We can solve this linear system using Cramer's rule. First, we write the coefficient matrix $A$ and the constant vector $B$:
$A = \begin{pmatrix} 2 & 3 & 10 \\ 4 & -6 & 5 \\ 6 & 9 & -20 \end{pmatrix}$
$B = \begin{pmatrix} 4 \\ 1 \\ 2 \end{pmatrix}$
Calculate the determinant of the coefficient matrix, $D = \det(A)$:
$D = \begin{vmatrix} 2 & 3 & 10 \\ 4 & -6 & 5 \\ 6 & 9 & -20 \end{vmatrix}$
Expand along the first row:
$D = 2 \begin{vmatrix} -6 & 5 \\ 9 & -20 \end{vmatrix} - 3 \begin{vmatrix} 4 & 5 \\ 6 & -20 \end{vmatrix} + 10 \begin{vmatrix} 4 & -6 \\ 6 & 9 \end{vmatrix}$
$D = 2((-6)(-20) - (5)(9)) - 3((4)(-20) - (5)(6)) + 10((4)(9) - (-6)(6))$
$D = 2(120 - 45) - 3(-80 - 30) + 10(36 + 36)$
$D = 2(75) - 3(-110) + 10(72)$
$D = 150 + 330 + 720$
$D = 1200$
Since $D \neq 0$, the system has a unique solution.
Calculate $D_u$ by replacing the first column of $A$ with the constant vector $B$:
$D_u = \begin{vmatrix} 4 & 3 & 10 \\ 1 & -6 & 5 \\ 2 & 9 & -20 \end{vmatrix}$
Expand along the first row:
$D_u = 4 \begin{vmatrix} -6 & 5 \\ 9 & -20 \end{vmatrix} - 3 \begin{vmatrix} 1 & 5 \\ 2 & -20 \end{vmatrix} + 10 \begin{vmatrix} 1 & -6 \\ 2 & 9 \end{vmatrix}$
$D_u = 4(120 - 45) - 3(-20 - 10) + 10(9 + 12)$
$D_u = 4(75) - 3(-30) + 10(21)$
$D_u = 300 + 90 + 210$
$D_u = 600$
Calculate $D_v$ by replacing the second column of $A$ with the constant vector $B$:
$D_v = \begin{vmatrix} 2 & 4 & 10 \\ 4 & 1 & 5 \\ 6 & 2 & -20 \end{vmatrix}$
Expand along the first row:
$D_v = 2 \begin{vmatrix} 1 & 5 \\ 2 & -20 \end{vmatrix} - 4 \begin{vmatrix} 4 & 5 \\ 6 & -20 \end{vmatrix} + 10 \begin{vmatrix} 4 & 1 \\ 6 & 2 \end{vmatrix}$
$D_v = 2(-20 - 10) - 4(-80 - 30) + 10(8 - 6)$
$D_v = 2(-30) - 4(-110) + 10(2)$
$D_v = -60 + 440 + 20$
$D_v = 400$
Calculate $D_w$ by replacing the third column of $A$ with the constant vector $B$:
$D_w = \begin{vmatrix} 2 & 3 & 4 \\ 4 & -6 & 1 \\ 6 & 9 & 2 \end{vmatrix}$
Expand along the first row:
$D_w = 2 \begin{vmatrix} -6 & 1 \\ 9 & 2 \end{vmatrix} - 3 \begin{vmatrix} 4 & 1 \\ 6 & 2 \end{vmatrix} + 4 \begin{vmatrix} 4 & -6 \\ 6 & 9 \end{vmatrix}$
$D_w = 2(-12 - 9) - 3(8 - 6) + 4(36 + 36)$
$D_w = 2(-21) - 3(2) + 4(72)$
$D_w = -42 - 6 + 288$
$D_w = 240$
Now, we find the values of $u$, $v$, and $w$ using Cramer's rule:
$u = \frac{D_u}{D} = \frac{600}{1200} = \frac{1}{2}$
$v = \frac{D_v}{D} = \frac{400}{1200} = \frac{1}{3}$
$w = \frac{D_w}{D} = \frac{240}{1200} = \frac{1}{5}$
Finally, substitute back to find $x$, $y$, and $z$ using the original substitutions $u = \frac{1}{x}$, $v = \frac{1}{y}$, and $w = \frac{1}{z}$:
$\frac{1}{x} = u \implies \frac{1}{x} = \frac{1}{2} \implies x = 2$
$\frac{1}{y} = v \implies \frac{1}{y} = \frac{1}{3} \implies y = 3$
$\frac{1}{z} = w \implies \frac{1}{z} = \frac{1}{5} \implies z = 5$
The solution to the system of equations is $x=2$, $y=3$, and $z=5$.
Verification:
Substitute the values into the original equations:
Eq 1: $\frac{2}{2} + \frac{3}{3} + \frac{10}{5} = 1 + 1 + 2 = 4$ (Holds)
Eq 2: $\frac{4}{2} - \frac{6}{3} + \frac{5}{5} = 2 - 2 + 1 = 1$ (Holds)
Eq 3: $\frac{6}{2} + \frac{9}{3} - \frac{20}{5} = 3 + 3 - 4 = 2$ (Holds)
The solution is verified.
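The Cramer's rule computation can be reproduced numerically (an illustrative NumPy sketch of the same steps; the column replacements mirror $D_u$, $D_v$, $D_w$ above):

```python
import numpy as np

# Linear system in u = 1/x, v = 1/y, w = 1/z
A = np.array([[2, 3, 10], [4, -6, 5], [6, 9, -20]], dtype=float)
b = np.array([4, 1, 2], dtype=float)

D   = np.linalg.det(A)
D_u = np.linalg.det(np.column_stack([b, A[:, 1], A[:, 2]]))
D_v = np.linalg.det(np.column_stack([A[:, 0], b, A[:, 2]]))
D_w = np.linalg.det(np.column_stack([A[:, 0], A[:, 1], b]))

u, v, w = D_u / D, D_v / D, D_w / D
print(round(D), round(D_u), round(D_v), round(D_w))   # 1200 600 400 240
print(round(1 / u), round(1 / v), round(1 / w))       # 2 3 5  ->  x, y, z
```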
Choose the correct answer in Exercises 17 to 19.
Question 17. If a, b, c, are in A.P, then the determinant
$\begin{vmatrix} x+2&x+3&x+2a\\x+3&x+4&x+2b\\x+4&x+5&x+2c \end{vmatrix}$ is
(A) 0
(B) 1
(C) x
(D) 2x
Answer:
Given:
The determinant $\begin{vmatrix} x+2&x+3&x+2a\\x+3&x+4&x+2b\\x+4&x+5&x+2c \end{vmatrix}$.
The numbers $a$, $b$, and $c$ are in Arithmetic Progression (A.P.).
To Find:
The value of the given determinant.
Solution:
Let the given determinant be $D$.
$D = \begin{vmatrix} x+2&x+3&x+2a\\x+3&x+4&x+2b\\x+4&x+5&x+2c \end{vmatrix}$
Since $a$, $b$, and $c$ are in A.P., the difference between consecutive terms is constant. This means:
$b - a = c - b$
(Property of A.P.)
This equality implies $2b = a+c$. Also, from $b-a = c-b$, we have $2(b-a) = 2(c-b)$.
Apply the row operations $R_2 \to R_2 - R_1$ and $R_3 \to R_3 - R_2$ to simplify the determinant.
For the new second row ($R_2'$):
- First element: $(x+3) - (x+2) = 1$
- Second element: $(x+4) - (x+3) = 1$
- Third element: $(x+2b) - (x+2a) = 2b - 2a = 2(b-a)$
For the new third row ($R_3'$):
- First element: $(x+4) - (x+3) = 1$
- Second element: $(x+5) - (x+4) = 1$
- Third element: $(x+2c) - (x+2b) = 2c - 2b = 2(c-b)$
The determinant becomes:
$D = \begin{vmatrix} x+2&x+3&x+2a\\1&1&2(b-a)\\1&1&2(c-b) \end{vmatrix}$
As $a, b, c$ are in A.P., we know that $b-a = c-b$. Let this common difference be $d$, so $b-a = d$ and $c-b = d$.
Substitute this into the determinant:
$D = \begin{vmatrix} x+2&x+3&x+2a\\1&1&2d\\1&1&2d \end{vmatrix}$
Observe that the second row ($R_2$) and the third row ($R_3$) of this determinant are identical.
A fundamental property of determinants states that if any two rows (or columns) of a determinant are identical, the value of the determinant is zero.
Therefore, $D = 0$.
The value of the determinant is $0$. This corresponds to option (A).
The final answer is (A) 0.
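A numerical spot check with any arithmetic progression illustrates the conclusion (a NumPy sketch; the sample values of $x$, $a$, $b$, $c$ below are arbitrary choices, not from the question):

```python
import numpy as np

x = 7.0
a, b, c = 1.0, 3.0, 5.0                  # any A.P.
M = np.array([[x + 2, x + 3, x + 2*a],
              [x + 3, x + 4, x + 2*b],
              [x + 4, x + 5, x + 2*c]])
print(abs(np.linalg.det(M)) < 1e-9)      # True -> the determinant is 0
```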
Question 18. If x, y, z are nonzero real numbers, then the inverse of matrix A = $\begin{bmatrix}x&0&0\\0&y&0\\0&0&z \end{bmatrix}$ is
(A) $\begin{bmatrix}x^{−1}&0&0\\0&y^{−1}&0\\0&0&z^{−1} \end{bmatrix}$
(B) $xyz \begin{bmatrix}x^{−1}&0&0\\0&y^{−1}&0\\0&0&z^{−1} \end{bmatrix}$
(C) $\frac{1}{xyz} \begin{bmatrix}x&0&0\\0&y&0\\0&0&z \end{bmatrix}$
(D) $\frac{1}{xyz} \begin{bmatrix}1&0&0\\0&1&0\\0&0&1 \end{bmatrix}$
Answer:
Given:
The matrix $A = \begin{bmatrix}x&0&0\\0&y&0\\0&0&z \end{bmatrix}$, where $x, y, z$ are nonzero real numbers.
To Find:
The inverse of matrix $A$, denoted as $A^{-1}$.
Solution:
The inverse of a square matrix $A$ exists if and only if its determinant is non-zero. The formula for the inverse is given by $A^{-1} = \frac{1}{\det(A)} \text{adj}(A)$, where $\det(A)$ is the determinant of $A$ and $\text{adj}(A)$ is the adjoint of $A$.
First, calculate the determinant of matrix $A$:
$\det(A) = \begin{vmatrix} x&0&0\\0&y&0\\0&0&z \end{vmatrix}$
Expanding along the first row:
$\det(A) = x \begin{vmatrix} y&0\\0&z \end{vmatrix} - 0 \begin{vmatrix} 0&0\\0&z \end{vmatrix} + 0 \begin{vmatrix} 0&y\\0&0 \end{vmatrix}$
$\det(A) = x(yz - 0) - 0 + 0 = xyz$
Since $x, y, z$ are nonzero, $xyz \neq 0$, so $\det(A) \neq 0$. Thus, the inverse of matrix $A$ exists.
Next, calculate the adjoint of matrix $A$. The adjoint is the transpose of the cofactor matrix. The cofactor $C_{ij}$ is given by $C_{ij} = (-1)^{i+j} M_{ij}$, where $M_{ij}$ is the minor of the element at position $(i, j)$.
The cofactor matrix of $A$ is:
$C_{11} = (-1)^{1+1} \begin{vmatrix} y&0\\0&z \end{vmatrix} = yz$
$C_{12} = (-1)^{1+2} \begin{vmatrix} 0&0\\0&z \end{vmatrix} = 0$
$C_{13} = (-1)^{1+3} \begin{vmatrix} 0&y\\0&0 \end{vmatrix} = 0$
$C_{21} = (-1)^{2+1} \begin{vmatrix} 0&0\\0&z \end{vmatrix} = 0$
$C_{22} = (-1)^{2+2} \begin{vmatrix} x&0\\0&z \end{vmatrix} = xz$
$C_{23} = (-1)^{2+3} \begin{vmatrix} x&0\\0&0 \end{vmatrix} = 0$
$C_{31} = (-1)^{3+1} \begin{vmatrix} 0&0\\y&0 \end{vmatrix} = 0$
$C_{32} = (-1)^{3+2} \begin{vmatrix} x&0\\0&0 \end{vmatrix} = 0$
$C_{33} = (-1)^{3+3} \begin{vmatrix} x&0\\0&y \end{vmatrix} = xy$
The cofactor matrix is $C = \begin{bmatrix} yz&0&0\\0&xz&0\\0&0&xy \end{bmatrix}$.
The adjoint matrix is $\text{adj}(A) = C^T = \begin{bmatrix} yz&0&0\\0&xz&0\\0&0&xy \end{bmatrix}$.
Now, calculate the inverse $A^{-1} = \frac{1}{\det(A)} \text{adj}(A)$:
$A^{-1} = \frac{1}{xyz} \begin{bmatrix} yz&0&0\\0&xz&0\\0&0&xy \end{bmatrix}$
Multiply each element by $\frac{1}{xyz}$:
$A^{-1} = \begin{bmatrix} \frac{yz}{xyz}&0&0\\0&\frac{xz}{xyz}&0\\0&0&\frac{xy}{xyz} \end{bmatrix} = \begin{bmatrix} \frac{1}{x}&0&0\\0&\frac{1}{y}&0\\0&0&\frac{1}{z} \end{bmatrix}$
Using the notation $x^{-1} = \frac{1}{x}$, $y^{-1} = \frac{1}{y}$, and $z^{-1} = \frac{1}{z}$, the inverse matrix is:
$A^{-1} = \begin{bmatrix} x^{-1}&0&0\\0&y^{-1}&0\\0&0&z^{-1} \end{bmatrix}$
Comparing this result with the given options, we see that it matches option (A).
The final answer is (A) $\begin{bmatrix}x^{−1}&0&0\\0&y^{−1}&0\\0&0&z^{−1} \end{bmatrix}$.
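For a diagonal matrix this inverse is easy to confirm numerically (a minimal NumPy sketch; the nonzero sample values are arbitrary):

```python
import numpy as np

x, y, z = 2.0, 5.0, -4.0                 # any nonzero values
A = np.diag([x, y, z])
expected = np.diag([1/x, 1/y, 1/z])      # option (A)
print(np.allclose(np.linalg.inv(A), expected))   # True
```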
Question 19. Let A = $\begin{bmatrix}1&\sinθ&1\\−\sinθ&1&\sinθ\\−1&−\sinθ&1 \end{bmatrix}$ , where 0 ≤ θ ≤ 2π. Then
(A) Det(A) = 0
(B) Det(A) ∈ (2, ∞)
(C) Det(A) ∈ (2, 4)
(D) Det(A) ∈ [2, 4]
Answer:
Given:
The matrix $A = \begin{bmatrix}1&\sinθ&1\\−\sinθ&1&\sinθ\\−1&−\sinθ&1 \end{bmatrix}$.
The range of $\theta$ is $0 \leq θ \leq 2π$.
To Find:
The range of values for $\det(A)$.
Solution:
Let's calculate the determinant of matrix $A$. We expand the determinant along the first row ($R_1$).
$\det(A) = 1 \times \begin{vmatrix} 1&\sinθ\\−\sinθ&1 \end{vmatrix} - \sinθ \times \begin{vmatrix} −\sinθ&\sinθ\\−1&1 \end{vmatrix} + 1 \times \begin{vmatrix} −\sinθ&1\\−1&−\sinθ \end{vmatrix}$
Now, we evaluate the $2 \times 2$ determinants:
$\begin{vmatrix} 1&\sinθ\\−\sinθ&1 \end{vmatrix} = (1)(1) - (\sinθ)(-\sinθ) = 1 + \sin^2θ$
$\begin{vmatrix} −\sinθ&\sinθ\\−1&1 \end{vmatrix} = (-\sinθ)(1) - (\sinθ)(-1) = -\sinθ + \sinθ = 0$
$\begin{vmatrix} −\sinθ&1\\−1&−\sinθ \end{vmatrix} = (-\sinθ)(-\sinθ) - (1)(-1) = \sin^2θ + 1$
Substitute these values back into the expression for $\det(A)$:
$\det(A) = 1 \times (1 + \sin^2θ) - \sinθ \times (0) + 1 \times (\sin^2θ + 1)$
$\det(A) = (1 + \sin^2θ) - 0 + (\sin^2θ + 1)$
$\det(A) = 1 + \sin^2θ + \sin^2θ + 1$
$\det(A) = 2 + 2\sin^2θ$
We are given that $0 \leq θ \leq 2π$. For any real value of $\theta$, the value of $\sinθ$ is in the range $[-1, 1]$.
$-1 \leq \sinθ \leq 1$
Squaring the value of $\sinθ$, the range of $\sin^2θ$ is from $0$ (when $\sin\theta = 0$) to $1$ (when $\sin\theta = \pm 1$).
$0 \leq \sin^2θ \leq 1$
Now, we find the range of $2\sin^2θ$ by multiplying the inequality by 2:
$2 \times 0 \leq 2\sin^2θ \leq 2 \times 1$
$0 \leq 2\sin^2θ \leq 2$
Finally, we find the range of $\det(A) = 2 + 2\sin^2θ$ by adding 2 to all parts of the inequality:
$2 + 0 \leq 2 + 2\sin^2θ \leq 2 + 2$
$2 \leq \det(A) \leq 4$
So, the value of $\det(A)$ is in the closed interval $[2, 4]$.
Comparing this result with the given options:
- (A) $\det(A) = 0$ is incorrect.
- (B) $\det(A) ∈ (2, ∞)$ is incorrect, as the maximum value is 4.
- (C) $\det(A) ∈ (2, 4)$ is incorrect, as $\det(A)$ can be equal to 2 (when $\sinθ=0$) and 4 (when $\sin\theta=\pm 1$).
- (D) $\det(A) ∈ [2, 4]$ is correct, as the range includes both 2 and 4.
The final answer is (D) Det(A) ∈ [2, 4].
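A quick numerical sweep over sample angles corroborates both the closed form $2 + 2\sin^2\theta$ and the range $[2, 4]$ (an illustrative NumPy sketch; the grid of angles is an arbitrary choice):

```python
import numpy as np

for t in np.linspace(0.0, 2*np.pi, 9):   # 0, pi/4, pi/2, ..., 2*pi
    s = np.sin(t)
    A = np.array([[1, s, 1], [-s, 1, s], [-1, -s, 1]])
    # numeric det agrees with 2 + 2*sin^2(theta); values range between 2 and 4
    print(round(np.linalg.det(A), 6), round(2 + 2*s**2, 6))
```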