Multiplying the blocks gives $\lambda_1\lambda_2$ on the diagonal, $\lambda_1 + \lambda_2$ on the first off-diagonal, and $1$ on the second off-diagonal. Each of these entries is symmetric in $\lambda_1$ and $\lambda_2$ (because scalar multiplication and addition are commutative), so the Jordan blocks commute as well.
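As a quick numerical check (a sketch of mine, not part of the original answer; the sizes and eigenvalues are arbitrary choices), NumPy confirms both the entries of the product and the commutation:

```python
import numpy as np

def jordan_block(lam, n):
    """n-by-n Jordan block: lam on the diagonal, 1 on the first superdiagonal."""
    return lam * np.eye(n) + np.diag(np.ones(n - 1), k=1)

J1 = jordan_block(2.0, 3)
J2 = jordan_block(5.0, 3)

P = J1 @ J2
print(np.diag(P))        # diagonal: lam1*lam2      -> [10. 10. 10.]
print(np.diag(P, k=1))   # first off-diagonal: lam1+lam2 -> [7. 7.]
print(np.diag(P, k=2))   # second off-diagonal: 1   -> [1.]

# Same-size Jordan blocks commute: the product is symmetric in lam1, lam2.
print(np.allclose(J1 @ J2, J2 @ J1))  # True
```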
If we write this in matrix form, we take the transpose of one vector and perform matrix multiplication to get the above dot product: $\mathbf u \cdot \mathbf v = \mathbf u^\mathsf{T} \mathbf v$. So, coming back full circle to the question: matrix multiplication is a tool for computing the vector dot product (assuming we are talking about matrices in the context of vectors).
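A minimal NumPy illustration of the same identity (the vectors are placeholders of my choosing):

```python
import numpy as np

u = np.array([1.0, 2.0, 3.0])
v = np.array([4.0, 5.0, 6.0])

# The dot product directly ...
dot = np.dot(u, v)                # 32.0

# ... and as a 1x3 row (u transposed) times a 3x1 column: matrix multiplication.
row = u.reshape(1, 3)   # u^T as a 1x3 matrix
col = v.reshape(3, 1)   # v as a 3x1 matrix
product = row @ col     # a 1x1 matrix, [[32.]]

print(dot, product[0, 0])  # 32.0 32.0
```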
All multiplications conform, all sums work out, and the resulting matrix is the size you'd expect. There is nothing special about splitting in two so long as you match any column split of $\mathbf A$ with a row split in $\mathbf B$ (try removing a block row from $\mathbf A$ or further splitting a block column of $\mathbf B$).
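Here is a small NumPy sketch of that column/row matching (the shapes and the split point $k$ are arbitrary choices of mine):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 6))
B = rng.standard_normal((6, 5))

# Split A's columns and B's rows at the same index k; any k works.
k = 2
A1, A2 = A[:, :k], A[:, k:]   # 4x2 and 4x4 blocks
B1, B2 = B[:k, :], B[k:, :]   # 2x5 and 4x5 blocks

# Each block product conforms, and their sum reproduces the full product.
print(np.allclose(A @ B, A1 @ B1 + A2 @ B2))  # True
```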
This is the case for a "naive" matrix multiplication algorithm, constructing the output element by element. There are so-called "fast" matrix multiplication algorithms whose flop count will differ from that shown above.
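For reference, here is a minimal triple-loop ("naive") implementation of my own; it uses exactly $n \cdot m \cdot p$ scalar multiplications for an $n\times m$ times $m\times p$ product, which is the flop count the comment refers to (fast algorithms such as Strassen's use fewer multiplications):

```python
def matmul_naive(A, B):
    """Naive matrix product, building the output element by element.

    A is n x m, B is m x p (lists of lists). Performs exactly n*m*p
    scalar multiplications, the flop count of the textbook algorithm.
    """
    n, m, p = len(A), len(B), len(B[0])
    C = [[0.0] * p for _ in range(n)]
    for i in range(n):
        for j in range(p):
            for k in range(m):
                C[i][j] += A[i][k] * B[k][j]
    return C

print(matmul_naive([[1, 2], [3, 4]], [[5, 6], [7, 8]]))
# [[19.0, 22.0], [43.0, 50.0]]
```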
This is the standard definition of matrix multiplication. I have to believe that matrix multiplication was defined to deal with this specific problem. The paper goes on to mention several properties of matrix multiplication, such as non-commutativity, composition with unity and zero, and exponentiation. Here is the written rule of composition:
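The rule itself is cut off in this snippet; in modern notation, the standard composition rule it refers to is presumably the familiar entrywise formula $$(AB)_{ik} = \sum_{j} a_{ij}\, b_{jk},$$ i.e., the $(i,k)$ entry of the product is the inner product of the $i$-th row of $A$ with the $k$-th column of $B$.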
To understand what "of total order $3$" means, consider this idea: if you have a vector (a tensor of order $1$), taking the inner product with another vector of the same length gives you a number. If you have a matrix (a tensor of order $2$), taking the products with two vectors, one for each dimension of the matrix, gives back a scalar. By the same pattern, a tensor of total order $3$ must be contracted with three vectors, one per dimension, to yield a scalar.
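A small NumPy sketch of that counting idea (the shapes are arbitrary choices for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)
u = rng.standard_normal(2)
v = rng.standard_normal(3)
w = rng.standard_normal(4)

# Order 1: a vector contracted with one vector gives a scalar.
x = rng.standard_normal(2)
print(np.einsum('i,i->', x, u))

# Order 2: a matrix contracted with two vectors (one per dimension) gives a scalar.
M = rng.standard_normal((2, 3))
print(np.einsum('ij,i,j->', M, u, v))

# Order 3: a tensor of total order 3 needs three vectors to reach a scalar.
T = rng.standard_normal((2, 3, 4))
print(np.einsum('ijk,i,j,k->', T, u, v, w))
```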
In this proof, I'm assuming that "the matrix of the composition is the product of the matrices." (Matrix multiplication is defined so that this is true.) I'm also assuming that "the matrix of the sum is the sum of the matrices." And I'm assuming that if two linear transformations are equal, then their matrices (with respect to given bases) are equal.
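A quick NumPy sanity check of the first two assumptions, treating matrices as the linear maps they induce (the dimensions here are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(2)
A = rng.standard_normal((3, 4))   # matrix of a map S: R^4 -> R^3
B = rng.standard_normal((4, 5))   # matrix of a map T: R^5 -> R^4
C = rng.standard_normal((3, 4))   # matrix of another map R^4 -> R^3
v = rng.standard_normal(5)
w = rng.standard_normal(4)

# The matrix of the composition S o T is the product AB.
print(np.allclose(A @ (B @ v), (A @ B) @ v))    # True

# The matrix of the sum of two maps is the sum of their matrices.
print(np.allclose(A @ w + C @ w, (A + C) @ w))  # True
```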
A matrix represents any finite, discrete linear transform of field values, and a matrix is itself a finite, discrete collection of field values. So if you have a linear transform that converts one matrix to another matrix, then the transform itself can be represented by matrix multiplication, once the matrices are flattened into vectors.
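One standard way to make this concrete (my illustration, not the answerer's): vectorize the matrix and use the identity $\operatorname{vec}(AXB) = (B^\mathsf{T} \otimes A)\operatorname{vec}(X)$, so the linear map $X \mapsto AXB$ becomes an ordinary matrix-vector product with one big matrix:

```python
import numpy as np

rng = np.random.default_rng(3)
A = rng.standard_normal((2, 3))
X = rng.standard_normal((3, 4))
B = rng.standard_normal((4, 5))

def vec(M):
    """Stack the columns of M into one long vector (column-major order)."""
    return M.flatten(order='F')

# The map X -> A X B, represented as a single (10 x 12) matrix acting on vec(X).
L = np.kron(B.T, A)
print(np.allclose(vec(A @ X @ B), L @ vec(X)))  # True
```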
The proof below is a straightforward application of the definition of matrix multiplication, and you can probably find it in any linear algebra text. The only "trick" is switching the order of summation in a double summation. Although, if you have a lot of experience with sums, you probably don't think of this as a "trick".
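The snippet does not show which proof is meant, but the associativity of matrix multiplication is a typical instance of exactly this order-of-summation swap: $$\bigl((AB)C\bigr)_{ij} = \sum_k \Bigl(\sum_l a_{il}\, b_{lk}\Bigr) c_{kj} = \sum_l a_{il} \Bigl(\sum_k b_{lk}\, c_{kj}\Bigr) = \bigl(A(BC)\bigr)_{ij}.$$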