Let M be the subset of 2x2 matrices A such that det(A)=0 and tr(A'A)=4 (I shall use ' to denote transpose).

Recall that one of the three definitions of a k-dimensional manifold is:

M c R^n is a k-dimensional manifold if for any p in M there is a neighborhood W c R^n of p and a smooth function F:W-->R^(n-k) so that F^-1(0) = M intersection W and rank(DF(x)) = n-k for every x in M intersection W.

In short, what we need to do is find a function F so that the inverse image of the zero vector under F gives M and the rank of the derivative of F is equal to the dimension of the codomain of F.

Let an arbitrary 2x2 matrix be written as:

[a c]

[b d]

Then the two constraints that define M are

1) ad-bc=0

2) a^2+b^2+c^2+d^2=norm(a,b,c,d)^2=4

Define F:R^4-->R^2 by F(a, b, c, d) = (ad-bc, norm(a,b,c,d)^2 - 4). Then clearly F^-1(0,0) = M. Furthermore, F is smooth because sums and products of the smooth coordinate functions a, b, c, d are smooth. (Note that we may take the neighborhood W of any point of M to be an open set containing all of M. Such a W is guaranteed to exist because, interpreting the set of 2x2 matrices as R^4, every point of M has norm 2, so any open ball centered at the origin with radius greater than 2 contains M.)

All that remains to be done is to check that rank(DF(x)) = 2 for every x in M. Observe that

[DF] = [ d  -c  -b   a ]
       [ 2a  2b  2c  2d ]

We shall now argue by contradiction. Suppose rank(DF(x)) were not 2 for some x = (a,b,c,d) in M. Then the two rows of DF(x) are linearly dependent, i.e.

h[d -c -b a] + k[2a 2b 2c 2d] = 0 and h and k are not both 0.

Suppose h is 0. Then we have 2ka = 2kb = 2kc = 2kd = 0, and since k cannot also be 0, this implies that a=b=c=d=0, therefore norm(a,b,c,d)^2=0. But, (a,b,c,d) must be in M, so this is a contradiction. Hence h cannot be 0.

Now suppose h is nonzero. Then we can divide through by h, and (renaming k/h as k) there exist a, b, c, d and a constant k so that

[d -c -b a] + k[2a 2b 2c 2d] = 0 i.e. we have:

d + 2ka = -c + 2kb = -b + 2kc = a + 2kd = 0. Substituting these equations into one another (for example, d = -2ka and a = -2kd give a = 4k^2 a), we obtain:

a(1 - 4k^2) = b(4k^2 - 1) = c(4k^2 - 1) = d(1 - 4k^2) = 0 and hence

a^2(1 - 4k^2)^2 = b^2(4k^2 - 1)^2 = c^2(4k^2 - 1)^2 = d^2(1 - 4k^2)^2 = 0.

Note that (1 - 4k^2)^2 = (4k^2 - 1)^2. Now, adding the four above expressions together we get:

(a^2 + b^2 + c^2 + d^2)(1 - 4k^2)^2 = norm(a,b,c,d)^2(1 - 4k^2)^2 = 0. But, since we require (a,b,c,d) to be in M, this reduces to 4(1 - 4k^2)^2 = 0. This implies that

1 - 4k^2 = 0, and hence k = +/-(1/2).

Now, if k=+1/2, then we have a + d = b - c = 0, therefore d = -a and b = c. Since we have det(A) = 0 as one of our constraints on M, this implies that ad - bc =

-(a^2) - (b^2) = -(a^2 + b^2) = 0, which implies a = b = 0. But then c = b = 0 and d = -a = 0, so (a,b,c,d) = (0,0,0,0) and hence norm(a,b,c,d)^2 = 0, which is a contradiction.

We can argue analogously for the case where k = -1/2. Hence, assuming that rank(DF) is not 2 for some (a,b,c,d) in M leads to a contradiction, so we conclude that rank(DF)=2 for all x in M.

Finally, since F:R^4-->R^2 satisfies the definition with n = 4 and n-k = 2, we conclude that M is a k = 4-2 = 2-dimensional manifold.
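As a quick numerical sanity check of this conclusion (a sketch, not part of the proof; the helper names are my own), one can pick points of M and verify that some 2x2 minor of DF is nonzero, which for a 2x4 matrix is equivalent to rank 2:

```python
def df_rows(a, b, c, d):
    """The two rows of DF at (a, b, c, d): the gradients of
    ad - bc and a^2 + b^2 + c^2 + d^2 - 4."""
    return (d, -c, -b, a), (2*a, 2*b, 2*c, 2*d)

def rank_is_two(a, b, c, d, tol=1e-12):
    """A 2x4 matrix has rank 2 exactly when some 2x2 minor is nonzero."""
    r1, r2 = df_rows(a, b, c, d)
    minors = [r1[i]*r2[j] - r1[j]*r2[i] for i in range(4) for j in range(i + 1, 4)]
    return max(abs(m) for m in minors) > tol

# The matrix [[2, 0], [0, 0]] lies in M: det = 0 and a^2+b^2+c^2+d^2 = 4.
print(rank_is_two(2, 0, 0, 0))   # True
```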

What do you have to do if it says half?

Divide by 2

For example

Half of 68=34

Half of 34=17

Half of 17=8.5
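The repeated halving above is just division by 2 at each step; as a one-line sketch:

```python
def half(x):
    """Halve a number; the result may be fractional, as in half of 17 = 8.5."""
    return x / 2

print(half(68), half(34), half(17))   # 34.0 17.0 8.5
```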

Let A be a 6by4 matrix and B a 4by6 matrix show that the 6by6 matrix AB can not be invertible?

In fact AB can never be invertible. Since A has only 4 columns, rank(A) <= 4, and rank(AB) <= min(rank(A), rank(B)) <= 4. A 6-by-6 matrix is invertible only if its rank is 6, so AB is singular.

Equivalently: B maps R^6 into R^4, so by rank-nullity its null space contains a nonzero vector x. Then (AB)x = A(Bx) = A0 = 0, so AB has a nontrivial kernel and cannot be invertible.
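This is easy to check numerically (a sketch with random matrices; `det` here is a hand-rolled Gaussian-elimination determinant, not a library routine): the determinant of AB comes out as zero no matter which A and B are chosen.

```python
import random

def matmul(A, B):
    """Product of two matrices given as lists of rows."""
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def det(M):
    """Determinant via Gaussian elimination with partial pivoting."""
    M = [row[:] for row in M]
    n = len(M)
    d = 1.0
    for i in range(n):
        p = max(range(i, n), key=lambda r: abs(M[r][i]))
        if abs(M[p][i]) < 1e-12:
            return 0.0                      # a zero pivot means a singular matrix
        if p != i:
            M[i], M[p] = M[p], M[i]
            d = -d
        d *= M[i][i]
        for r in range(i + 1, n):
            f = M[r][i] / M[i][i]
            for c in range(i, n):
                M[r][c] -= f * M[i][c]
    return d

random.seed(0)
A = [[random.uniform(-1, 1) for _ in range(4)] for _ in range(6)]   # 6x4
B = [[random.uniform(-1, 1) for _ in range(6)] for _ in range(4)]   # 4x6
print(abs(det(matmul(A, B))) < 1e-9)   # True: the 6x6 product has rank at most 4
```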

What do you call it when somebody spends 20 years in the 24th row of a theater?

Living in "X" aisle *exile*. Play on words/letters; "X" is the 24th letter of the alphabet, and when read, "X" aisle sounds like "exile".

Prove that the trace of a matrix A is equal to the sum of its eigenvalues?

Given a matrix A=([a,b],[c,d]), the trace of A is a+d, and the det of A is ad-bc.

By using the characteristic equation, and representing the eigenvalues with x, we have the equation

x^2 - (a+d)x + (ad-bc) = 0

Which, using the formula for quadratic equations, gives us the eigenvalues as,

x1 = [(a+d) + √((a+d)^2 - 4(ad-bc))]/2

x2 = [(a+d) - √((a+d)^2 - 4(ad-bc))]/2

now by adding the two eigenvalues together we get:

x1 + x2 = (a+d)/2 + [√((a+d)^2 - 4(ad-bc))]/2 + (a+d)/2 - [√((a+d)^2 - 4(ad-bc))]/2

The square roots cancel each other out being the same value with opposite signs, leaving us with:

x1+x2=(a+d)/2+(a+d)/2

x1+x2= 2(a+d)/2

x1+x2=(a+d)

x1+x2=trace(A)

Q.E.D.
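The 2x2 computation can be spot-checked numerically (a sketch; the matrix entries are arbitrary examples):

```python
import math

def eigen_sum_2x2(a, b, c, d):
    """Sum of the eigenvalues of [[a, b], [c, d]] via the quadratic formula
    (assumes a non-negative discriminant, i.e. real eigenvalues)."""
    tr, det = a + d, a*d - b*c
    disc = math.sqrt(tr*tr - 4*det)
    x1 = (tr + disc) / 2
    x2 = (tr - disc) / 2
    return x1 + x2

print(eigen_sum_2x2(3, 1, 2, 5))   # the trace, 3 + 5 = 8, up to rounding
```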

General proof

The above answer only works for 2x2 matrices. I'm going to answer it for nxn matrices. (Not mxn; the question only makes sense when the matrix is square.) The proof uses the following ingredients:

(1) Every nxn matrix (working over the complex numbers, so that the characteristic polynomial splits into linear factors) is conjugate to an upper-triangular matrix

(2) If A is upper-triangular, then tr(A) is the sum of the eigenvalues of A

(3) If A and B are conjugate, then tr(A) = tr(B)

(4) If A and B are conjugate, then A and B have the same characteristic polynomial (and hence the same sum-of-eigenvalues)

If these are all true, then we can do the following: Given a matrix A, find an upper-triangular matrix U conjugate to A; then (letting s(A) denote the sum of the eigenvalues of A) s(A) = s(U) = tr(U) = tr(A).

Now to prove (1), (2), (3) and (4):

(1) This is an inductive process. First you prove that your matrix is conjugate to one with a 0 in the bottom-left corner. Then you prove that this, in turn, is conjugate to one with 0s at the bottom-left and the one above it. And so on. Eventually you get a matrix with no nonzero entries below the leading diagonal, i.e. an upper-triangular matrix.

(2) Suppose A is upper-triangular, with elements a1, a2, ... , an along the leading diagonal. Let f(t) be the characteristic polynomial of A. So f(t) = det(tI-A). Note that tI-A is also upper-triangular. Therefore its determinant is simply the product of the elements in its leading diagonal. So f(t) = det(tI-A) = (t-a1) * ... * (t-an). And its eigenvalues are a1, ... , an. So the sum of the eigenvalues is a1 + ... + an, which is the sum of the diagonal elements in A.

(3) This is best proved using summation convention. Summation convention is a strange but rather useful trick. Basically, the calculations I've written below aren't true as they're written: for each expression, you need to sum over all possible values of the repeated subscripts. For example, where it says b_ii, it really means b_11 + b_22 + ... + b_nn. Where it says b_il delta_li, it means (b_11 delta_11 + ... + b_1n delta_n1) + ... + (b_n1 delta_1n + ... + b_nn delta_nn). Oh, and delta_kj = 1 if k = j, and 0 otherwise.

Suppose B = PAP^-1. Let's say the element in row j and column k of A is a_jk. Similarly, say the (i,j) element of P is p_ij, the (k,l) element of P^-1 is p*_kl, and the (i,l) element of B is b_il. Then:

b_il = p_ij a_jk p*_kl

And the trace of B is given by:

tr(B) = b_ii

= b_il delta_li

= p*_kl delta_li p_ij a_jk

= p*_kl p_lj a_jk

= delta_kj a_jk (since p and p* are inverses, p*_kl p_lj = delta_kj)

= a_jj

= tr(A)

(4) Again, suppose B = PAP^-1. Then, for any scalar t, we have tI-B = P(tI-A)P^-1. Hence det(tI-B) = det(P).det(tI-A).det(P^-1). Since det(P).det(P^-1) = det(PP^-1) = det(I) = 1, we have det(tI-B) = det(tI-A).
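Ingredients (3) and (4) can be spot-checked numerically (a sketch; the matrices A and P are arbitrary examples, with P chosen invertible):

```python
def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)] for i in range(2)]

def inv2(P):
    """Inverse of a 2x2 matrix (assumes det(P) != 0)."""
    (a, b), (c, d) = P
    det = a*d - b*c
    return [[d/det, -b/det], [-c/det, a/det]]

def trace(M):
    return M[0][0] + M[1][1]

def char_at(M, t):
    """Value of det(tI - M) for a 2x2 matrix M."""
    return (t - M[0][0]) * (t - M[1][1]) - M[0][1] * M[1][0]

A = [[1.0, 2.0], [3.0, 4.0]]
P = [[2.0, 1.0], [1.0, 1.0]]          # det(P) = 1, so P is invertible
B = matmul(matmul(P, A), inv2(P))     # B = P A P^-1, conjugate to A

print(abs(trace(B) - trace(A)) < 1e-9)                                      # (3): traces agree
print(all(abs(char_at(B, t) - char_at(A, t)) < 1e-9 for t in (0, 1, 2.5)))  # (4): char polys agree
```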

What is the greatest common factor of 75x to the power of 3y to the power of 2 and 100xy?

Reading the question as asking for the GCF of 75x^3y^2 and 100xy: 75x^3y^2 = 25xy · 3x^2y and 100xy = 25xy · 4, so the greatest common factor is 25xy. (25 is the GCF of 75 and 100, and x and y are the highest powers of each variable common to both terms.)

Is the Speed Sensor and the Speedo Head the same thing for a 95 Ford Taurus?

No. The speed sensor is located on the transmission, and is what drives the speedometer cable, which drives the speedo head. The speedo head is another name for the speedometer, the display gauge that you see while sitting in the car.

C program to check whether a given matrix is orthogonal or not?

(An orthogonal matrix must be square, so the program reads a single order n. With integer entries this detects integer orthogonal matrices such as permutation and sign matrices; for general real entries, use doubles and compare against the identity within a small tolerance.)

#include <iostream>
using namespace std;

int main()
{
    int a[20][20], b[20][20], c[20][20], i, j, k, n;

    cout << "Input order of the square matrix A\n";
    cin >> n;

    cout << "Input A matrix\n";
    for (i = 0; i < n; ++i)
        for (j = 0; j < n; ++j)
            cin >> a[i][j];

    // b = transpose of a
    for (i = 0; i < n; ++i)
        for (j = 0; j < n; ++j)
            b[i][j] = a[j][i];

    // c = A * transpose(A)
    for (i = 0; i < n; ++i)
        for (j = 0; j < n; ++j)
        {
            c[i][j] = 0;
            for (k = 0; k < n; ++k)   // note: k < n, not k <= n
                c[i][j] += a[i][k] * b[k][j];
        }

    cout << "\nMatrix A * transpose of A:\n";
    for (i = 0; i < n; ++i)
    {
        for (j = 0; j < n; ++j)
            cout << c[i][j] << " ";
        cout << "\n";
    }

    // A is orthogonal exactly when A * transpose(A) is the identity matrix
    int orthogonal = 1;
    for (i = 0; i < n; ++i)
        for (j = 0; j < n; ++j)
            if (c[i][j] != (i == j ? 1 : 0))
                orthogonal = 0;

    if (orthogonal)
        cout << "\nMatrix A is orthogonal\n";
    else
        cout << "\nMatrix A is NOT orthogonal\n";

    return 0;
}

-ALOK

How can I prove that similar matrices have same eigenvalues?

First, we'll start with the definition of an eigenvalue. Let v be a non-zero vector and A be a linear transformation acting on v. k is an eigenvalue of the linear transformation A if the following equation is satisfied:

Av = kv

Meaning the linear transformation has merely scaled the vector v by the value k, without changing its direction.

By definition, two matrices, A and B, are similar if B = TAT^-1, where T is the change of basis matrix.

Let w = Tv, the vector v expressed in the new basis; then v = T^-1w.

We want to show that Bw = kw, so that k is also an eigenvalue of B (with eigenvector w):

Bw = TAT^-1w = TAv = Tkv = kTv = kw

Q.E.D.
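A numerical spot-check of this argument (a sketch; A, T and the eigenpair are chosen purely for illustration):

```python
def matvec(M, v):
    return [M[0][0]*v[0] + M[0][1]*v[1], M[1][0]*v[0] + M[1][1]*v[1]]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)] for i in range(2)]

A = [[2.0, 0.0], [0.0, 3.0]]       # Av = kv with k = 2, v = (1, 0)
T = [[1.0, 1.0], [0.0, 1.0]]       # change-of-basis matrix
Tinv = [[1.0, -1.0], [0.0, 1.0]]   # its inverse

B = matmul(matmul(T, A), Tinv)     # B = T A T^-1, similar to A
v = [1.0, 0.0]
w = matvec(T, v)                   # w = Tv

print(matvec(B, w), [2 * wi for wi in w])   # Bw = kw: [2.0, 0.0] [2.0, 0.0]
```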


If A is an orthogonal matrix then why is it's inverse also orthogonal?

First let's be clear on the definitions.

A matrix M is orthogonal if M^T = M^-1

Or multiply both sides by M and you have

1) M M^T = I

or

2) M^T M = I

Where I is the identity matrix.

So our definition tells us a matrix is orthogonal if its transpose equals its inverse, or if the product (left or right) of the matrix and its transpose is the identity.

Now we want to show why the inverse of an orthogonal matrix is also orthogonal.

Let A be orthogonal. We are assuming it is square since it has an inverse.

Now we want to show that A^-1 is orthogonal, i.e. that the transpose of A^-1 equals its inverse.

Since A is orthogonal, A^-1 = A^T.

Take the transpose of both sides: (A^-1)^T = (A^T)^T = A.

Now multiply: A^-1 (A^-1)^T = A^-1 A = I.

Compare this to the definition above in 1) (M M^T = I):

do you see how A^-1 now fits the definition of orthogonal?

Of course we could have multiplied on the left instead, and then we would have arrived at 2) above.
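A quick numerical illustration (a sketch using a 2x2 rotation matrix, which is orthogonal):

```python
import math

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)] for i in range(2)]

def transpose(M):
    return [[M[j][i] for j in range(2)] for i in range(2)]

t = 0.7
A = [[math.cos(t), -math.sin(t)], [math.sin(t), math.cos(t)]]  # rotation: orthogonal
Ainv = transpose(A)               # for an orthogonal matrix, A^-1 = A^T

# Check that A^-1 is itself orthogonal: A^-1 (A^-1)^T should be the identity
P = matmul(Ainv, transpose(Ainv))
print(all(abs(P[i][j] - (1 if i == j else 0)) < 1e-9 for i in range(2) for j in range(2)))  # True
```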

What is the pseudocode of Gaussian elimination without pivoting?

In outline: for each column, use the diagonal entry as the pivot (without any row swaps), eliminate every entry below it, and then solve the resulting triangular system by back substitution.
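A minimal sketch of this (assuming no zero pivot is ever encountered, which is exactly the risk of omitting pivoting):

```python
def gauss_no_pivot(A, b):
    """Solve Ax = b by Gaussian elimination WITHOUT pivoting.
    Assumes every pivot A[k][k] is nonzero when it is reached."""
    n = len(A)
    A = [row[:] for row in A]
    b = b[:]
    # Forward elimination: zero out the entries below each pivot
    for k in range(n - 1):
        for i in range(k + 1, n):
            m = A[i][k] / A[k][k]          # multiplier (fails if the pivot is 0)
            for j in range(k, n):
                A[i][j] -= m * A[k][j]
            b[i] -= m * b[k]
    # Back substitution on the upper-triangular system
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        s = b[i] - sum(A[i][j] * x[j] for j in range(i + 1, n))
        x[i] = s / A[i][i]
    return x

print(gauss_no_pivot([[2.0, 1.0], [1.0, 3.0]], [3.0, 5.0]))   # x = [0.8, 1.4] for this system
```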

What is invertible counterpoint?

Invertible counterpoint: the contrapuntal design of two or more voices in a polyphonic texture so that any of them may serve as an upper voice or as the bass. Invertible counterpoint involving two (three, four) voices is called double (triple, quadruple) counterpoint. http://www.answers.com/topic/invertible-counterpoint-music



What is "a 3b"? Is it a^3·b? Or a + 3b? 3ab? I think "a 3b" is the following: A is an invertible matrix, as is B; we also have that the matrices AB, A^2B, A^3B and A^4B are all invertible. Prove A^5B is invertible. The problem is that the sum of invertible matrices may not be invertible. Consider using the characteristic polynomial?

How the OR addition different from the ordinary addition?

A: An OR gate does not perform addition or any other mathematical function; rather, it makes a logical true/false decision on two (or more) inputs. Only when all inputs are false is the output false ("0"); that is the defining rule of an OR gate.

An AND gate performs a different logic function: all inputs must be true ("1") for the output to be true. Mathematical calculations in a machine (computer) are instead carried out on binary numbers. The key difference from ordinary addition is the carry: 1 + 1 = 10 in binary, whereas 1 OR 1 = 1.
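The difference already shows up on single bits (a sketch):

```python
# Logical OR versus ordinary addition, bit by bit
for x in (0, 1):
    for y in (0, 1):
        print(x, y, "OR:", x | y, "ADD:", x + y)   # OR saturates at 1; 1 + 1 carries to 2 (binary 10)
```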

How do you show the direct sum of the image and kernel of the linear operator is the vector space?

Ok, linear algebra isn't my strongest area but I'll have a go (please note that all vectors are in bold, if a mathematical lower case letter is not in bold, assume it to be a scalar unless otherwise indicated).

First, I'm going to assume that you already know how to prove that the kernel of a linear map, say f:U->V where U and V are vector spaces, is a vector subspace of U, and that the image is a vector subspace of V, as this needs to be proved before going on to show that the sum of ker(f) and Im(f) is a vector space. (For a linear operator we have V = U, so both subspaces live in the same space and their sum makes sense.)

Now, a vector space must follow the following axioms (here A represents the addition axioms, and M the multiplication axioms):

A1) associativity u+(v+w)=(u+v)+w

A2) commutativity u+v=v+u

A3) identity there exists an element 0 in U such that v+0=v

A4) existence of an inverse for all u in U there exists an element -u in U such that u+(-u)=0

M1) scalar multiplication with respect to vector addition a(u+v)=au+av

M2) scalar multiplication with respect to field addition (a+b)u=au+bu

M3) compatibility of scalar and field multiplication (ab)u=a(bu)

M4) identity 1u=u where 1 is the multiplicative identity

ker(f) = {u in U | f(u) = 0} (equivalently, if A is a matrix representing f, ker(f) is the null space of A, i.e. the set of u with Au = 0); Im(f) = {v in V | f(u) = v for some u in U}.

Let W = ker(f) + Im(f), the set of all sums u + v with u in ker(f) and v in Im(f). Show W is a vector space, i.e. show W satisfies the above axioms. The key extra point is closure: if w1 = u1 + v1 and w2 = u2 + v2 are in W, then w1 + w2 = (u1 + u2) + (v1 + v2) is again a kernel element plus an image element, since ker(f) and Im(f) are themselves subspaces; similarly for scalar multiples. (Note that for W to be a direct sum one additionally needs ker(f) ∩ Im(f) = {0}, which is a separate condition on the operator.)

A1, A2, M1, M2, and M3 are all trivial/simple to prove, since W inherits its operations from the ambient space.

A3: f(0) = 0, so 0 lies in both ker(f) and Im(f); hence 0 = 0 + 0 lies in W, and it serves as the additive identity there.

A4: let w = u + v be in W, with u in ker(f) and v in Im(f). Then -u is in ker(f), since f(-u) = -f(u) = -0 = 0, and -v is in Im(f), since v = f(x) for some x gives -v = f(-x). Hence -w = (-u) + (-v) lies in W and w + (-w) = 0. (Thanks to Jokes Free4Me for the help with this axiom)

M4: 1w = w holds for every w in W because it already holds in the ambient vector space; here 1 is the multiplicative identity of the field of scalars.

All the axioms are satisfied, therefore W=ker(f)+Im(f) is a vector space. Q.E.D.
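For a concrete illustration where the direct-sum decomposition does hold (a sketch using a projection P with P^2 = P): every vector v splits as Pv + (v - Pv), with Pv in Im(P) and v - Pv in ker(P), since P(v - Pv) = Pv - P^2v = 0.

```python
def matvec(P, v):
    return [sum(P[i][j] * v[j] for j in range(len(v))) for i in range(len(P))]

P = [[1.0, 0.0], [0.0, 0.0]]   # projection onto the x-axis: P*P = P
v = [3.0, 4.0]

img_part = matvec(P, v)                              # Pv, in Im(P)
ker_part = [v[i] - img_part[i] for i in range(2)]    # v - Pv, in ker(P)

print(img_part, ker_part, matvec(P, ker_part))   # [3.0, 0.0] [0.0, 4.0] [0.0, 0.0]
```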

How many solutions does a system of equations have when solving results in the statement 3 = 5?

A false statement such as 3 = 5 means the equations are inconsistent: the system has no solutions.