Linear Algebra

Linear algebra is the detailed study of vector spaces. With applications in such disparate fields as sociology, economics, computer programming, chemistry, and physics, including its essential role in mathematically describing quantum mechanics and the theory of relativity, linear algebra has become one of the most essential mathematical disciplines for the modern world. Please direct all questions regarding matrices, determinants, eigenvalues, eigenvectors, and linear transformations into this category.

2,176 Questions

What is an example of a linear equation?

The slope-intercept form of a linear equation is

y = mx + b

where

m = slope and b = the y-intercept.
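For a concrete instance, here is a minimal Python sketch (the slope and intercept values are arbitrary choices of mine):

```python
# Example linear equation: y = 2x + 3 (slope m = 2, y-intercept b = 3).
def f(x):
    m, b = 2, 3  # arbitrary example values
    return m * x + b

# A few points on the line; each unit step in x raises y by the slope m = 2.
points = [(x, f(x)) for x in range(4)]
print(points)  # [(0, 3), (1, 5), (2, 7), (3, 9)]
```

Note that the y-intercept b is just the value of y when x = 0.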

What is dot product?

In vector calculus, the dot product of two vectors is the product of the length of one vector and the length of the parallel component of the other. It doesn't matter which vector is taken first, because lengths are scalars and scalar multiplication is commutative. The easiest way to compute the dot product of u and v (written u•v) is to multiply the lengths of the two vectors together and then multiply by the cosine of the angle between them: u•v = |u||v|cos θ. Because lengths are scalars, the result is always a scalar. Equivalently, you can identify the component of v that is parallel to u and multiply their lengths, which gives the same value: (|v|cos θ)|u|.
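Both forms can be checked numerically. A small Python sketch — the helper names (`dot`, `length`) and the example vectors are my own:

```python
import math

def dot(u, v):
    """Component-wise dot product: sum of products of matching entries."""
    return sum(a * b for a, b in zip(u, v))

def length(u):
    """Euclidean length of a vector."""
    return math.sqrt(dot(u, u))

u, v = [3.0, 0.0], [2.0, 2.0]

# Geometric form: |u| |v| cos(theta), theta being the angle between u and v.
theta = math.acos(dot(u, v) / (length(u) * length(v)))
geometric = length(u) * length(v) * math.cos(theta)

# Both forms agree (here the dot product is 6.0).
assert abs(dot(u, v) - geometric) < 1e-9
```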

What is a linear system?

A linear system is a set of two or more linear equations considered together; solving it means finding the intersection of the lines they describe. The equations are usually expressed with two variables, x and y (in three dimensions a third variable, z, appears). Basically, the solution is where the lines intersect, and the most common ways of finding it are graphing, substitution, and elimination.

These are also described as systems whose parameters vary directly or proportionally: plotting the functions results in straight lines.
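The elimination method can be sketched for a 2 x 2 system in Python (the function name and example system are mine; it assumes the lines actually intersect in a single point):

```python
def solve_2x2(a, b, c, d, e, f):
    """Solve a*x + b*y = e, c*x + d*y = f by elimination.

    Assumes a unique solution exists (determinant a*d - b*c != 0).
    """
    det = a * d - b * c
    if det == 0:
        raise ValueError("lines are parallel or coincident")
    x = (e * d - b * f) / det
    y = (a * f - e * c) / det
    return x, y

# x + y = 3 and x - y = 1 intersect at (2, 1).
print(solve_2x2(1, 1, 1, -1, 3, 1))  # (2.0, 1.0)
```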

How do you find the inverse of A in system of linear equation and matrices?

First, we need to recall that a linear equation does not involve any products or roots of variables. All variables occur only to the first power and do not appear as arguments for trigonometric, logarithmic, or exponential functions. For example, x + √y = 4, y = sin x, and 2x + y - z + yz = 5 are not linear.

To solve a system of equations such as

3x + y = 5

2x - y = 3

all the information required for the solution is embodied in the augmented matrix (the coefficients and right-hand constants arranged in a rectangular array)

[ 3  1 | 5 ]
[ 2 -1 | 3 ]

and that the solution can be obtained by performing appropriate operations on this matrix.

The coefficient matrix of this linear system is the square matrix A:

[ 3  1 ]
[ 2 -1 ]

Think of this matrix in the general form

[ a  b ]
[ c  d ]

To find an inverse of this square matrix A (2 x 2), we need to find a matrix B of the same size such that AB = I and BA = I. If such a B exists, A is said to be invertible and B is called the inverse of A. If no such matrix can be found, A is said to be singular.

An invertible matrix has exactly one inverse.

A 2 x 2 matrix A is invertible if and only if ad - bc ≠ 0 (ad - bc is the determinant of A).

The formula for the inverse of a 2 x 2 square matrix A is

A⁻¹ = 1/(ad - bc) × [  d  -b ]
                    [ -c   a ]

So let's find the inverse of our example. Here ad - bc = (3)(-1) - (1)(2) = -5, so

A⁻¹ = (1/-5) × [ -1  -1 ]  =  [ 1/5   1/5 ]
               [ -2   3 ]     [ 2/5  -3/5 ]

An n x m matrix with n ≠ m cannot have an inverse. An n x n matrix may or may not have an inverse.

To find the inverse of an n x n matrix, adjoin the identity matrix to the right side of A, producing a matrix of the form [A | I]. Then apply row operations to this matrix until the left side is reduced to I. These operations convert the right side to A⁻¹, so the final matrix has the form [I | A⁻¹].

(There are many other methods for finding the inverse of an n x n matrix, such as dividing the adjugate by the determinant.)
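For the 2 x 2 case, the formula above is easy to code directly. A minimal Python sketch (the function name is mine), applied to the example matrix:

```python
def inverse_2x2(a, b, c, d):
    """Inverse of [[a, b], [c, d]] via 1/(ad - bc) * [[d, -b], [-c, a]]."""
    det = a * d - b * c
    if det == 0:
        raise ValueError("matrix is singular (determinant is zero)")
    return [[d / det, -b / det], [-c / det, a / det]]

# The matrix from the system above: det = 3*(-1) - 1*2 = -5.
inv = inverse_2x2(3, 1, 2, -1)
print(inv)  # [[0.2, 0.2], [0.4, -0.6]], i.e. [[1/5, 1/5], [2/5, -3/5]]
```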

What is the use of eigenvalues in daily life?

In N dimensions, i.e. 3 space dimensions and 1 time dimension (our perceived universe), there is a way of collapsing all of the contained information. Any nontrivial eigenfunction, e.g. x^4 + y^4 + z^4 = c for any nonzero real c, evaluated as time becomes discrete toward 0, gives an informational representation of 3 dimensions within 4 dimensions after the first derivative, and a 2-dimensional informational representation of 4-dimensional space. After differentiating any eigenfunction, the eigenvalues over a domain and range can be represented by a singularity (i.e. the singularity that expanded "in the beginning", commonly known as the Big Bang).


In real life this can be useful in reducing large amounts of information into a "simplest" symbol. i.e. a 2 dimensional string (string theory) can be described by an Eigen Function. All the information of said string can be described in binary code i.e. 00111010101010110011101010 can be reduced by defining a string of binary code by a symbol say: q (where there can be an infinite amount of symbols)
Therefore a string of two-dimensional information represented by symbols can then be reduced again by another symbol.

This can be done until all of the information in any space time of n dimensions is represented by a symbol defined by all of the information in said space time dimensions up to time t with complete regression assuming that information CAN be regressed with no error.

A real-life example: huge amounts of binary code are reduced and then transmitted to a television, where the simplified information is expanded back to the original information, which the television then transforms into a moving picture.

A more controversial assessment of a real-life example in mathematics and physics is that all of the information in our current space-time (4-D) up to t = present can be represented by binary code at a snapshot of one Planck time, reduced to a symbol. The span of Planck times from the Big Bang to now can then be reduced back down to the singularity from which the Big Bang occurred. (This assumes that there is no error in the possibly irreducible holonomies; the decomposition and classification of information can be represented by a singularity.)
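Setting the speculation aside, eigenvalues can be computed quite concretely. A minimal Python sketch for the 2 x 2 case using the characteristic polynomial (helper name and example matrix are mine):

```python
import math

def eigvals_2x2(a, b, c, d):
    """Eigenvalues of [[a, b], [c, d]]: roots of t^2 - (a+d)t + (ad - bc) = 0.

    Assumes the eigenvalues are real (always true for symmetric matrices).
    """
    tr, det = a + d, a * d - b * c
    disc = math.sqrt(tr * tr - 4 * det)
    return (tr + disc) / 2, (tr - disc) / 2

# A symmetric 2x2 example; the largest eigenvalue governs what repeated
# multiplication by the matrix does, which is why eigenvalues show up in
# applications like vibration analysis and ranking algorithms.
print(eigvals_2x2(2, 1, 1, 2))  # (3.0, 1.0)
```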

Would you show an example of math how to find the cross product?

For two vectors (A & B) in 3-space, using the (i j k) unit vector notation:

if A = a1*i + a2*j + a3*k, and B = b1*i + b2*j + b3*k the cross product A X B can be found by computing a determinant of the following matrix:

| i   j   k  |
| a1  a2  a3 |
| b1  b2  b3 |

Mathematically, it will look like this: (a2*b3 - a3*b2)*i - (a1*b3 - a3*b1)*j + (a1*b2 - a2*b1)*k

I did a little copying and pasting from a cross-product website I've posted a link to, which has some good information.
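The determinant expansion above translates directly into code. A minimal Python sketch (the function name is mine):

```python
def cross(a, b):
    """Cross product of two 3-vectors via the determinant expansion."""
    a1, a2, a3 = a
    b1, b2, b3 = b
    return (a2 * b3 - a3 * b2,
            -(a1 * b3 - a3 * b1),
            a1 * b2 - a2 * b1)

# i x j = k for the standard unit vectors.
print(cross((1, 0, 0), (0, 1, 0)))  # (0, 0, 1)
```

Note the result is a vector, unlike the dot product, and swapping the operands flips its sign.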

What is idempotent matrix?

An idempotent matrix is a matrix that gives back the same matrix when multiplied by itself.

In simple words, the square of the matrix is equal to the matrix itself.

If M is our matrix and

MM = M,

then M is an idempotent matrix.
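The condition MM = M can be checked directly. A small Python sketch with a 2 x 2 example matrix I picked:

```python
def matmul_2x2(m, n):
    """Multiply two 2x2 matrices given as nested lists."""
    return [[sum(m[i][k] * n[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

# [[2, -2], [1, -1]] is idempotent: squaring it gives the matrix back.
m = [[2, -2], [1, -1]]
assert matmul_2x2(m, m) == m
```

The identity matrix is the simplest example: I times I is I.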

Which is the correct form for a linear equation?

The general form of a linear equation in n variables is

a1x1 + a2x2 + ... + anxn = b

where the xi are the variables and the ai are constant coefficients.

Benefits of Caley hamilton theorem in matrices?

The Cayley-Hamilton (not Caley hamilton) theorem allows powers of the matrix to be calculated more simply by using the characteristic function of the matrix. It can also provide a simple way to calculate the inverse matrix.
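For a 2 x 2 matrix, the theorem gives A² - tr(A)A + det(A)I = 0, which rearranges to A⁻¹ = (tr(A)I - A)/det(A). A minimal Python sketch of that inverse shortcut (function name mine):

```python
def inverse_via_cayley_hamilton(a, b, c, d):
    """For A = [[a, b], [c, d]], Cayley-Hamilton gives
    A^2 - tr(A)*A + det(A)*I = 0, hence A^-1 = (tr(A)*I - A) / det(A)."""
    tr, det = a + d, a * d - b * c
    if det == 0:
        raise ValueError("singular matrix has no inverse")
    return [[(tr - a) / det, -b / det],
            [-c / det, (tr - d) / det]]

print(inverse_via_cayley_hamilton(3, 1, 2, -1))  # [[0.2, 0.2], [0.4, -0.6]]
```

For 2 x 2 matrices this reproduces the usual adjugate formula, since tr(A) - a = d and tr(A) - d = a.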

What are the first 15 perfect squares?

The first 15 are: 1, 4, 9, 16, 25, 36, 49, 64, 81, 100, 121, 144, 169, 196, and 225.
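Equivalently, in Python:

```python
# The first 15 perfect squares: n*n for n = 1..15.
squares = [n * n for n in range(1, 16)]
print(squares)
```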

What does scale mean in math?

A scale in math is used on a chart.

For example, if you made a chart such as "Tickets Sold", the scale would be the markings along the axis showing the number of tickets sold. It is usually done in a regular pattern such as 1000, 2000, 3000, 4000, and so on.

Who is Christoff Rudolff?

He was a sixteenth century mathematician, born in Jawor (now in southwestern Poland) who wrote the first book on algebra in the German language. He studied at the University of Vienna. He invented the symbol we now use for square root, and also was the first to define the zero power to equal one.

What happens when Cramer's rule is applied to dependent or inconsistent systems?

Let D be the determinant of the coefficient matrix of the system. When D = 0, Cramer's rule fails (it would divide by zero): the system is either inconsistent (no solution) or dependent (infinitely many solutions), and you need another method to tell which and to solve it.
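A sketch of this check in Python for 2 x 2 systems (the function name and example systems are mine):

```python
def cramer_2x2(a, b, c, d, e, f):
    """Cramer's rule for a*x + b*y = e, c*x + d*y = f.

    Returns None when D = ad - bc is 0: the system is then dependent
    or inconsistent, and another method is needed.
    """
    D = a * d - b * c
    if D == 0:
        return None
    return ((e * d - b * f) / D, (a * f - e * c) / D)

print(cramer_2x2(1, 1, 2, 2, 3, 6))   # None: dependent (2nd eq. is twice the 1st)
print(cramer_2x2(1, 1, 2, 2, 3, 7))   # None: inconsistent (parallel lines)
print(cramer_2x2(3, 1, 2, -1, 5, 3))  # (1.6, 0.2)
```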

What is the point at which the lines intersect in a system of linear equations?

The coordinates of the point of intersection represents the solution to the linear equations.

What is an independent system of linear equations?

A set of vectors in Rm is linearly independent when no vector in the set can be written as a linear combination of the others; equivalently, the vector equation x1v1 + ... + xnvn = 0 (and the corresponding matrix equation Ax = 0) has only the trivial solution x = 0. A system of linear equations is called independent when it has exactly one solution.

How are matrices useful in real life?

Matrices are one of the easier topics you learn in Algebra II, and they stay useful long after high school: computer graphics uses them for rotations and projections, statistics uses them for data tables and regression, and science and engineering use them to solve large systems of equations.

If A is any mxn such that AB and BA are both defined show that B is an mxn matrix?

By the rule of matrix multiplication, the number of columns in the first matrix must equal the number of rows in the second. Suppose A is an m x n matrix and B is a c x d matrix. Since AB is defined, n = c, so B has n rows. Since BA is defined, d = m, so B has m columns. This means B is not necessarily m x n; it must be n x m.

How do you show a matrix is invertible?

For small matrices the simplest way is to show that its determinant is not zero.

How do you solve a linear inequalities?

To solve a linear inequality, proceed as with a linear equation: isolate the variable, but reverse the inequality sign whenever you multiply or divide both sides by a negative number. To show the solution on a graph, draw the boundary line (dashed for < or >, solid for ≤ or ≥) and shade the side of the line whose points satisfy the inequality.

What are the properties of a dot product?

In mathematics, the dot product is an algebraic operation that takes two equal-length sequences of numbers (usually vectors) and returns a single number obtained by multiplying corresponding entries and adding up those products. The name is derived from the interpunct "•" often used to designate this operation; the alternative name scalar product emphasizes the scalar result, rather than a vector result. Its key properties are that it is commutative (u•v = v•u), distributive over vector addition (u•(v + w) = u•v + u•w), compatible with scalar multiplication ((ku)•v = k(u•v)), and positive-definite (u•u ≥ 0, with equality only for the zero vector).

The principal use of this product is the inner product in a Euclidean vector space: when two vectors are expressed in an orthonormal basis, the dot product of their coordinate vectors gives their inner product. For this geometric interpretation, the scalars must be taken to be real. The dot product can be defined over a more general field, for instance the complex numbers, but many properties would differ. In three-dimensional space, the dot product contrasts with the cross product, which produces a vector as its result.
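These standard properties can be spot-checked numerically. A small Python sketch with example vectors of my own choosing:

```python
def dot(u, v):
    """Component-wise dot product of two equal-length vectors."""
    return sum(a * b for a, b in zip(u, v))

u, v, w = [1, 2, 3], [4, -5, 6], [-1, 0, 2]

# Commutative: u.v = v.u
assert dot(u, v) == dot(v, u)
# Distributive over vector addition: u.(v + w) = u.v + u.w
vw = [a + b for a, b in zip(v, w)]
assert dot(u, vw) == dot(u, v) + dot(u, w)
# Compatible with scalar multiplication: (k u).v = k (u.v)
ku = [3 * a for a in u]
assert dot(ku, v) == 3 * dot(u, v)
# Positive-definite: u.u >= 0, zero only for the zero vector
assert dot(u, u) > 0 and dot([0, 0, 0], [0, 0, 0]) == 0
```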