I. (a) Using Plücker line coordinates, $\mathcal{L}$, write an expression for the point of intersection of a line with a plane, and the plane defined by a point and a line.
(b) Now derive the condition for a point to be on a line, and a line to be on a plane.
(c) Show that parallel planes intersect in a line on $\pi_{\infty}$.
(d) Show that parallel lines intersect on $\pi_{\infty}$.
(a) We know that the point of intersection $X$ of a line with Plücker matrix $L$ and a plane $\pi$ is given by
$$\textbf{X} = L \boldsymbol{\pi}$$
Expanding this using the Plücker coordinates $\{l_{12}, l_{13}, l_{14}, l_{23}, l_{42}, l_{34}\}$ as elements of the skew-symmetric matrix $L$, we get
$$\begin{pmatrix} X_{1} \\ X_{2} \\ X_{3} \\ X_{4} \end{pmatrix} = \begin{pmatrix} 0 & l_{12} & l_{13} & l_{14} \\ -l_{12} & 0 & l_{23} & -l_{42} \\ -l_{13} & -l_{23} & 0 & l_{34} \\ -l_{14} & l_{42} & -l_{34} & 0 \end{pmatrix}\begin{pmatrix} \pi_{1} \\ \pi_{2} \\ \pi_{3} \\ \pi_{4} \end{pmatrix}$$
The bottom-right $3 \times 3$ submatrix of $L$ is itself skew-symmetric, and as we saw in the solution to exercise 8 of chapter 2, multiplication by a $3 \times 3$ skew-symmetric matrix represents a cross product.
So, the matrix product
$$\begin{pmatrix} 0 & l_{23} & -l_{42} \\ -l_{23} & 0 & l_{34} \\ l_{42} & -l_{34} & 0 \end{pmatrix}\begin{pmatrix} \pi_{2} \\ \pi_{3} \\ \pi_{4} \end{pmatrix}$$
can be represented as the cross product
$$-\textbf{b} \times \boldsymbol{\hat{\pi}} = \boldsymbol{\hat{\pi}} \times \textbf{b}$$
where $\textbf{b} = [l_{34}, l_{42}, l_{23}]$ and $\boldsymbol{\hat{\pi}} = [\pi_2, \pi_3, \pi_4]$.
We can now rewrite the matrix product as follows
$$\begin{pmatrix} X_{1} \\ X_{2} \\ X_{3} \\ X_{4} \end{pmatrix} = \begin{pmatrix} 0 & \textbf{a}^T \\ -\textbf{a} & -[\textbf{b}]_{\times} \end{pmatrix}\begin{pmatrix} \pi_{1} \\ \pi_{2} \\ \pi_{3} \\ \pi_{4} \end{pmatrix}$$
where $\textbf{a} = [l_{12}, l_{13}, l_{14}]$.
Then we get $$X_1 = \textbf{a} \cdot \boldsymbol{\hat{\pi}}$$ $$\boldsymbol{\hat{X}} = -[\textbf{b}]_{\times} \boldsymbol{\hat{\pi}} - \pi_1 \textbf{a} = \boldsymbol{\hat{\pi}} \times \textbf{b} - \pi_1 \textbf{a}$$
where $\boldsymbol{\hat{X}} = [X_2, X_3, X_4]$.
This is the equation of line-plane intersection in Plücker coordinates $\mathcal{L} = (\textbf{a}, \textbf{b})$ where $\textbf{a} = [l_{12}, l_{13}, l_{14}]$ and $\textbf{b} = [l_{34}, l_{42}, l_{23}]$ and $\textbf{a} \cdot \textbf{b} = 0$ (the Klein quadric constraint).
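As a quick numerical sanity check of this formula (and of the Klein quadric constraint), here is a small numpy sketch; it follows the index conventions above, and the specific points and plane are arbitrary illustrative choices.

```python
# Sanity check of the line-plane intersection formula, assuming the conventions
# of this post: L = A B^T - B A^T for two points A, B on the line,
# a = (l12, l13, l14), b = (l34, l42, l23), and the split pi = (pi1, pi_hat).
import numpy as np

A = np.array([1.0, 2.0, 3.0, 1.0])           # two arbitrary points defining the line
B = np.array([4.0, 0.0, -1.0, 1.0])
L = np.outer(A, B) - np.outer(B, A)          # Plücker matrix of the line

a = np.array([L[0, 1], L[0, 2], L[0, 3]])    # (l12, l13, l14)
b = np.array([L[2, 3], -L[1, 3], L[1, 2]])   # (l34, l42, l23); L[1, 3] = l24 = -l42

pi = np.array([1.0, -2.0, 0.5, 3.0])         # an arbitrary plane
pi1, pi_hat = pi[0], pi[1:]

X_matrix = L @ pi                            # intersection via the Plücker matrix
X_closed = np.concatenate(([a @ pi_hat],                     # X1 = a . pi_hat
                           np.cross(pi_hat, b) - pi1 * a))   # X_hat = pi_hat x b - pi1 a

print(np.allclose(X_matrix, X_closed))       # True
print(np.isclose(a @ b, 0.0))                # True: Klein quadric constraint a . b = 0
```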
Similarly, from the expansion of the equation $\boldsymbol{\pi} = L^{*}\textbf{X}$, we can derive the equation of a plane defined by a point and a line in Plücker coordinates to be
$$\pi_1 = \textbf{b} \cdot \boldsymbol{\hat{X}}$$ $$\boldsymbol{\hat{\pi}} = \boldsymbol{\hat{X}} \times \textbf{a} - X_1 \textbf{b}$$
(b) For a point $\textbf{X}$ to be on a line $L$, it must satisfy the condition $L^{*}\textbf{X} = 0$. In Plücker coordinates, this would imply $$\textbf{b} \cdot \boldsymbol{\hat{X}} = 0$$ and $$\boldsymbol{\hat{X}} \times \textbf{a} - X_1 \textbf{b} = 0 \implies \boldsymbol{\hat{X}} \times \textbf{a} = X_1 \textbf{b}$$
Similarly, for a line $L$ to be on a plane $\boldsymbol{\pi}$, it must satisfy the condition $L\boldsymbol{\pi} = 0$. In Plücker coordinates, this would imply $$\textbf{a} \cdot \boldsymbol{\hat{\pi}} = 0$$ and $$\boldsymbol{\hat{\pi}} \times \textbf{b} = \pi_1 \textbf{a}$$ Note that equality in these equations is exact, not just up to scale.
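These conditions are easy to exercise numerically. The sketch below uses the same example line as the previous snippet, builds a plane through the line and an off-line point with the formula from (a), and then applies the point-on-line and line-on-plane tests from (b); the helper names are hypothetical.

```python
# Incidence tests from (a) and (b), using the conventions of this post.
import numpy as np

A = np.array([1.0, 2.0, 3.0, 1.0])
B = np.array([4.0, 0.0, -1.0, 1.0])
L = np.outer(A, B) - np.outer(B, A)
a = np.array([L[0, 1], L[0, 2], L[0, 3]])    # (l12, l13, l14)
b = np.array([L[2, 3], -L[1, 3], L[1, 2]])   # (l34, l42, l23)

def on_line(X, a, b, tol=1e-9):
    """Point-on-line: b . X_hat = 0 and X_hat x a = X1 b."""
    X1, X_hat = X[0], X[1:]
    return abs(b @ X_hat) < tol and np.allclose(np.cross(X_hat, a), X1 * b, atol=tol)

def plane_from_point_and_line(X, a, b):
    """Plane through the point X and the line (a, b): pi1 = b . X_hat, pi_hat = X_hat x a - X1 b."""
    X1, X_hat = X[0], X[1:]
    return np.concatenate(([b @ X_hat], np.cross(X_hat, a) - X1 * b))

def line_on_plane(pi, a, b, tol=1e-9):
    """Line-on-plane: a . pi_hat = 0 and pi_hat x b = pi1 a."""
    pi1, pi_hat = pi[0], pi[1:]
    return abs(a @ pi_hat) < tol and np.allclose(np.cross(pi_hat, b), pi1 * a, atol=tol)

C = np.array([0.0, 1.0, 0.0, 1.0])           # a point off the line
pi = plane_from_point_and_line(C, a, b)

print(on_line(A, a, b), on_line(B, a, b))    # True True
print(on_line(C, a, b))                      # False
print(line_on_plane(pi, a, b))               # True: the line lies on the plane it spans with C
```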
(c) Parallel planes have the same plane normal. Let the two parallel planes be $$\boldsymbol{\pi_1} = (\textbf{n}^T, d_1)$$ $$\boldsymbol{\pi_2} = (\textbf{n}^T, d_2)$$ where $\textbf{n} = (n_1, n_2, n_3)$ is the common plane normal.
The line of intersection of these two planes, in dual Plücker matrix representation is given by
$$L^{*} = \boldsymbol{\pi_1\pi_2}^T - \boldsymbol{\pi_2\pi_1}^T$$
$$= \begin{pmatrix} \textbf{n} \\ d_1 \end{pmatrix} \begin{pmatrix} \textbf{n}^T & d_2 \end{pmatrix} - \begin{pmatrix} \textbf{n} \\ d_2 \end{pmatrix} \begin{pmatrix} \textbf{n}^T & d_1 \end{pmatrix}$$
$$= \begin{pmatrix} \textbf{nn}^T & d_2\textbf{n} \\ d_1\textbf{n}^T & d_1d_2 \end{pmatrix} - \begin{pmatrix} \textbf{nn}^T & d_1\textbf{n} \\ d_2\textbf{n}^T & d_1d_2 \end{pmatrix}$$
$$= \begin{pmatrix} 0 & (d_2 - d_1)\textbf{n} \\ (d_1 - d_2)\textbf{n}^T & 0 \end{pmatrix}$$
$$= (d_1 - d_2)\begin{pmatrix} 0 & -\textbf{n} \\ \textbf{n}^T & 0 \end{pmatrix}$$
Since $d_1 \neq d_2$ (the planes are distinct) and $L^{*}$ is only defined up to scale, we can drop the factor $(d_1 - d_2)$ and take
$$L^{*} = \begin{pmatrix} 0 & -\textbf{n} \\ \textbf{n}^T & 0 \end{pmatrix}$$
From this, we read off the Plücker coordinates $(l_{12}, l_{13}, l_{14}, l_{23}, l_{42}, l_{34}) = (n_3, -n_2, 0, n_1, 0, 0)$, i.e. $\textbf{a} = (n_3, -n_2, 0)$ and $\textbf{b} = (0, 0, n_1)$.
Using the conditions we derived in (b), we see that
$$\textbf{a} \cdot \boldsymbol{\hat{\pi}_{\infty}} = (n_3, -n_2, 0) \cdot (0, 0, 1) = 0$$
and
$$\boldsymbol{\hat{\pi}_{\infty}} \times \textbf{b} = (0, 0, 1) \times (0, 0, n_1) = \textbf{0}$$
$$\pi_1 \textbf{a} = 0 \cdot \textbf{a} = \textbf{0}$$ where $\pi_1 = 0$ is the first component of $\boldsymbol{\pi}_{\infty}$.
Hence, we can conclude that the intersection of parallel planes is a line that lies on $\boldsymbol{\pi_{\infty}}$.
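The following sketch illustrates this numerically: it forms $L^{*}$ from two concrete parallel planes, reads off $(\textbf{a}, \textbf{b})$ via the dual rewrite $l_{12} \leftrightarrow l_{34}$, $l_{13} \leftrightarrow l_{42}$, $l_{14} \leftrightarrow l_{23}$, and applies the line-on-plane test from (b) to $\boldsymbol{\pi_{\infty}}$. The normal and offsets are arbitrary.

```python
# Parallel planes intersect in a line on pi_inf (conventions as in this post).
import numpy as np

n = np.array([2.0, -1.0, 0.5])                # common plane normal
pi_a = np.concatenate((n, [1.0]))             # n . x + 1 = 0
pi_b = np.concatenate((n, [-3.0]))            # n . x - 3 = 0, parallel to pi_a

Lstar = np.outer(pi_a, pi_b) - np.outer(pi_b, pi_a)     # dual Plücker matrix of the intersection line

# Read off (a, b) of the line via the dual rewrite (0-indexed matrix entries):
a = np.array([Lstar[2, 3], -Lstar[1, 3], Lstar[1, 2]])  # (l12, l13, l14)
b = np.array([Lstar[0, 1], Lstar[0, 2], Lstar[0, 3]])   # (l34, l42, l23)

def line_on_plane(pi, a, b, tol=1e-9):
    pi1, pi_hat = pi[0], pi[1:]
    return abs(a @ pi_hat) < tol and np.allclose(np.cross(pi_hat, b), pi1 * a, atol=tol)

pi_inf = np.array([0.0, 0.0, 0.0, 1.0])
print(line_on_plane(pi_inf, a, b))                           # True: the line lies on pi_inf
print(line_on_plane(pi_a, a, b), line_on_plane(pi_b, a, b))  # True True: and on both planes
```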
(d) Parallel lines in 3-space are by definition coplanar and thus must intersect in $\mathbb{P}^3$. Each line is the axis of a pencil of planes; from these two pencils, choose two distinct parallel planes, one containing each line. From (c), these planes intersect in a line that lies on $\boldsymbol{\pi_{\infty}}$. The point where the two parallel lines meet lies in both planes, and hence on this line on $\boldsymbol{\pi_{\infty}}$.
II. Show that a (real) projective transformation of 3-space can map an ellipsoid to a paraboloid or hyperboloid of two sheets, but cannot map an ellipsoid to a hyperboloid of one sheet (i.e. a surface with real rulings).
We can diagonalize the quadric matrices of ellipsoids, paraboloids and two-sheeted hyperboloids so that, in each case, the diagonal is $(1, 1, 1, -1)$.
For a general ellipsoid, using row scaling, this metric diagonalization takes the form
$$C_e = S_e^T \begin{pmatrix} \sqrt{a} & 0 & 0 & 0 \\ 0 & \sqrt{b} & 0 & 0 \\ 0 & 0 & \sqrt{c} & 0 \\ 0 & 0 & 0 & 1 \end{pmatrix} \begin{pmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & -1 \end{pmatrix} \begin{pmatrix} \sqrt{a} & 0 & 0 & 0 \\ 0 & \sqrt{b} & 0 & 0 \\ 0 & 0 & \sqrt{c} & 0 \\ 0 & 0 & 0 & 1 \end{pmatrix} S_e $$
where $S_e$ is a similarity transformation that translates and rotates the ellipsoid so that it is centered at the origin, its axes are aligned with the Cartesian axes and its equation is reduced to the canonical form $ax^2 + by^2 + cz^2 = 1$.
Combining the transformation matrices, this transformation can be rewritten as $C_e = H_e^TDH_e$.
Similarly, using spectral decomposition and row scaling, the metric diagonalization of a general paraboloid takes the form
$$C_p = S_p^T \begin{pmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & \frac{1}{\sqrt{2}} & \frac{1}{\sqrt{2}} \\ 0 & 0 & -\frac{1}{\sqrt{2}} & \frac{1}{\sqrt{2}} \end{pmatrix} \begin{pmatrix} \sqrt{a} & 0 & 0 & 0 \\ 0 & \sqrt{b} & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{pmatrix} \begin{pmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & -1 \end{pmatrix} \begin{pmatrix} \sqrt{a} & 0 & 0 & 0 \\ 0 & \sqrt{b} & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{pmatrix} \begin{pmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & \frac{1}{\sqrt{2}} & -\frac{1}{\sqrt{2}} \\ 0 & 0 & \frac{1}{\sqrt{2}} & \frac{1}{\sqrt{2}} \end{pmatrix}S_p$$
where $S_p$ is a similarity transformation that translates and rotates the paraboloid so that its vertex is at the origin, its axis is aligned with the z-axis and its equation is reduced to the canonical form $ax^2 + by^2 = 2z$ (the factor of 2 matches the off-diagonal entries of the resulting matrix and amounts to a harmless rescaling of the constants).
Combining the transformation matrices, this transformation can be rewritten as $C_p = H_p^TDH_p$.
Finally, using permutation matrices and row scaling, the metric diagonalization of a two-sheeted hyperboloid takes the form
$$C_h = S_h^T \begin{pmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 1 \\ 0 & 0 & 1 & 0 \end{pmatrix} \begin{pmatrix} \sqrt{a} & 0 & 0 & 0 \\ 0 & \sqrt{b} & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & \sqrt{c} \end{pmatrix} \begin{pmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & -1 \end{pmatrix} \begin{pmatrix} \sqrt{a} & 0 & 0 & 0 \\ 0 & \sqrt{b} & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & \sqrt{c} \end{pmatrix} \begin{pmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 1 \\ 0 & 0 & 1 & 0 \end{pmatrix} S_h$$
where $S_h$ is a similarity transformation that translates and rotates the hyperboloid so that it is centered at the origin, its axis is aligned with the z-axis and its equation is reduced to the canonical form $ax^2 + by^2 - cz^2 + 1 = 0$.
Combining the transformation matrices, this transformation can be rewritten as $C_h = H_h^TDH_h$.
As all the transformations $H_e, H_p, H_h$ are invertible, we can then easily convert between ellipsoids, paraboloids and two-sheeted hyperboloids. For example, to convert from an ellipsoid $C_e$ to a paraboloid $C_p$, we can use the transform
$$C_p = H_p^TDH_p$$ $$C_p = H_p^T(H_e^{-T}C_eH_e^{-1})H_p$$ $$C_p = (H_e^{-1}H_p)^TC_e(H_e^{-1}H_p)$$
Similarly, to convert from an ellipsoid $C_e$ to a two-sheeted hyperboloid $C_h$, we can use the transform
$$C_h = (H_e^{-1}H_h)^TC_e(H_e^{-1}H_h)$$
Hence ellipsoids, paraboloids and two-sheeted hyperboloids are projectively equivalent.
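As a concrete check of this equivalence, here is a short numpy sketch with $S_e = S_p = I$ (quadrics already in canonical position) and arbitrary positive constants; it builds $C_e$ and $C_p$ from the factorizations above and verifies the congruence $(H_e^{-1}H_p)^T C_e (H_e^{-1}H_p) = C_p$.

```python
# Ellipsoid -> paraboloid by a real projective transformation.
import numpy as np

D = np.diag([1.0, 1.0, 1.0, -1.0])

# Ellipsoid a x^2 + b y^2 + c z^2 = 1:  C_e = H_e^T D H_e
ae, be, ce = 2.0, 3.0, 5.0
H_e = np.diag([np.sqrt(ae), np.sqrt(be), np.sqrt(ce), 1.0])
C_e = H_e.T @ D @ H_e                          # diag(a, b, c, -1)

# Paraboloid a x^2 + b y^2 = 2z:  C_p = H_p^T D H_p
ap, bp = 1.0, 4.0
s = 1.0 / np.sqrt(2.0)
R = np.array([[1.0, 0.0, 0.0, 0.0],
              [0.0, 1.0, 0.0, 0.0],
              [0.0, 0.0,   s,  -s],
              [0.0, 0.0,   s,   s]])
H_p = np.diag([np.sqrt(ap), np.sqrt(bp), 1.0, 1.0]) @ R
C_p = H_p.T @ D @ H_p                          # diag(a, b) block plus the off-diagonal z-w terms

# The congruence H = H_e^{-1} H_p carries C_e to C_p:
H = np.linalg.inv(H_e) @ H_p
print(np.allclose(H.T @ C_e @ H, C_p))         # True
```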
The second result is a direct consequence of Sylvester’s law of inertia [1], which states that congruent matrices have the same signature. Two matrices $A$ and $B$ are said to be congruent if they are related by an invertible transformation $S$ such that $S^TAS = B$. Contrapositively, if two matrices have different signatures then they cannot be congruent.
We know that the signature (the number of positive eigenvalues minus the number of negative ones) of an ellipsoid’s matrix is 2 and that of a one-sheeted hyperboloid is 0. Hence these matrices cannot be congruent. This means there is no real projective transformation $H$ such that $H^TC_eH = C_{osh}$, where $C_e$ is the quadratic form of an ellipsoid and $C_{osh}$ is the quadratic form of a one-sheeted hyperboloid.
Note that the first result can also be proven using Sylvester’s law of inertia: ellipsoids, paraboloids and two-sheeted hyperboloids all have signature 2 (inertia $(3, 1)$ for their full-rank $4 \times 4$ matrices), so their matrices are congruent.
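The signature argument is also easy to check numerically: the sketch below computes the inertia (number of positive and negative eigenvalues) of a canonical quadric matrix for each of the four surface types; the particular canonical forms are just convenient representatives.

```python
# Inertia / signature of canonical quadric matrices (Sylvester's law of inertia).
import numpy as np

def inertia(C):
    """Return (#positive, #negative) eigenvalues of a symmetric matrix C."""
    w = np.linalg.eigvalsh(C)
    return int(np.sum(w > 0)), int(np.sum(w < 0))

quadrics = {
    "ellipsoid (x^2+y^2+z^2-1=0)":               np.diag([1.0, 1.0, 1.0, -1.0]),
    "paraboloid (x^2+y^2-2z=0)":                 np.array([[1.0, 0, 0, 0],
                                                           [0, 1.0, 0, 0],
                                                           [0, 0, 0, -1.0],
                                                           [0, 0, -1.0, 0]]),
    "two-sheeted hyperboloid (x^2+y^2-z^2+1=0)": np.diag([1.0, 1.0, -1.0, 1.0]),
    "one-sheeted hyperboloid (x^2+y^2-z^2-1=0)": np.diag([1.0, 1.0, -1.0, -1.0]),
}

for name, C in quadrics.items():
    print(name, inertia(C))
# The first three all have inertia (3, 1), i.e. signature 2, so they are congruent.
# The one-sheeted hyperboloid has inertia (2, 2), i.e. signature 0, so no real
# projective transformation maps an ellipsoid onto it.
```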
III. Show that the $4 \times 4$ matrix representing the Euclidean transformation $\{R, t\}$ (with $a$ the direction of the rotation axis, i.e. $Ra = a$) has two complex conjugate eigenvalues, and two equal real eigenvalues, and the following eigenvector structure:
(a) if $a$ is perpendicular to $t$, then the eigenvectors corresponding to the real eigenvalues are distinct;
(b) otherwise, the eigenvectors corresponding to the real eigenvalues are coincident, and on $\pi_{\infty}$.
What do the eigenvectors corresponding to the complex eigenvalues represent?
Let $H_E = \begin{pmatrix} R & t \\ \mathbf{0}^T & 1 \end{pmatrix}$.
Let $CP(R)$ represent the characteristic polynomial of $R$. Since $H_E$ is block upper-triangular, the characteristic polynomial in $\lambda$ of the given matrix is $CP(H_E) = (1 - \lambda) \cdot CP(R)$. Hence the eigenvalues of $H_E$ are 1 together with the eigenvalues of $R$.
As $R$ is an orthogonal matrix, each of its eigenvalues has absolute value 1, and since $R$ is real, any complex eigenvalues occur in conjugate pairs. As $R$ is also a rotation matrix, $\det R = 1$, so its eigenvalues are $1, e^{i\theta}, e^{-i\theta}$, where $\theta$ is the rotation angle; for a non-trivial rotation the latter two form a complex conjugate pair on the unit circle (degenerating to the repeated real value $-1$ when $\theta = \pi$). If the eigenvalue 1 were repeated, then $\det R = 1$ would force the third eigenvalue to be 1 as well, and since $R$ is diagonalizable (being orthogonal, hence normal), $R$ would have to be the identity. Hence, for a general (non-identity) rotation, the eigenvalues of $R$ are 1 and a complex conjugate pair, and the eigenvalues of $H_E$ are two equal real eigenvalues (both equal to 1) and two complex conjugate eigenvalues.
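A quick numerical illustration of this eigenvalue structure, building $R$ from an arbitrary axis and angle via Rodrigues' formula and embedding it in $H_E$:

```python
# Eigenvalues of a Euclidean transformation H_E = [[R, t], [0, 1]].
import numpy as np

def rotation(axis, theta):
    """3x3 rotation by angle theta about the (not necessarily unit) vector axis."""
    a = axis / np.linalg.norm(axis)
    K = np.array([[0, -a[2], a[1]],
                  [a[2], 0, -a[0]],
                  [-a[1], a[0], 0]])            # [a]_x
    return np.eye(3) + np.sin(theta) * K + (1 - np.cos(theta)) * (K @ K)

theta = 0.7
R = rotation(np.array([1.0, 2.0, 2.0]), theta)
t = np.array([0.3, -1.0, 2.0])

H_E = np.eye(4)
H_E[:3, :3] = R
H_E[:3, 3] = t

print(np.linalg.eigvals(H_E))
# -> two eigenvalues equal to 1 and the complex conjugate pair
#    cos(theta) +- i sin(theta) = e^{+-i theta} (printed in some order).
```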
For the eigenvectors corresponding to the real eigenvalues to be distinct, the matrix $H_E - I$ must have a nullspace of dimension 2. By the rank-nullity theorem, this means the column space of $H_E - I$ must have dimension $4 - 2 = 2$.
Note that $H_E - I = \begin{pmatrix} R - I & t \\ \mathbf{0}^T & 0 \end{pmatrix}$, so its rank equals the rank of $[R - I \mid t]$. As $R - I$ already has a 2-dimensional column space (there is only one eigenvector of $R$ with eigenvalue 1), for $H_E - I$ to have a 2-dimensional column space, $t$ must lie in the column space of $R - I$.
We know that $a$ is an eigenvector of $R$ ($Ra = a$). As $R^T = R^{-1}$ and as the eigenvectors of a matrix and its inverse are the same, $a$ lies in the nullspace of $R^T - I = (R - I)^T$. Hence $a$ lies in the left nullspace of $R - I$.
As the column space and left nullspace of any matrix are orthogonal complements, $t$ lies in the column space of $R - I$ exactly when it is orthogonal to $a$. Hence the eigenvectors of $H_E$ corresponding to the real eigenvalue 1 are distinct precisely when $a$ is perpendicular to $t$ (planar motion).
If $t$ is not in the column space of $R - I$, then the rank of the matrix $H_E - I$ will be 3, meaning that its nullspace will be 1-dimensional. Hence the eigenvectors corresponding to the real eigenvalue will be coincident. Further, from the structure of $H_E - I$, the eigenvector that spans this nullspace is $[\textbf{a}^T, 0]^T$, where $a$ is the eigenvector of $R$ with eigenvalue 1 (the rotation axis). This clearly lies on $\boldsymbol{\pi_\infty}$.
The eigenvectors corresponding to the complex conjugate eigenvalues are also fixed points of the transformation, and from the last row of $H_E$ their final coordinate must be zero, so they lie on $\boldsymbol{\pi_\infty}$. For a rotation matrix, the complex conjugate eigenvectors lie in the (complexified) plane orthogonal to the axis $a$ [2]. We also know that a rotation fixes the circular points of every plane orthogonal to its axis. Hence, we can conclude that the complex conjugate eigenvectors of $R$, and therefore of $H_E$, are the circular points at infinity of the planes orthogonal to the rotation axis.
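Finally, a short sketch contrasting cases (a) and (b): it measures the dimension of the eigenvalue-1 eigenspace of $H_E$ (the nullspace of $H_E - I$) for a translation perpendicular to the rotation axis versus one with a component along the axis; the axis, angle and translations are arbitrary illustrative choices.

```python
# Eigenvector structure of H_E for planar motion vs. general screw motion.
import numpy as np

def rotation(axis, theta):
    a = axis / np.linalg.norm(axis)
    K = np.array([[0, -a[2], a[1]], [a[2], 0, -a[0]], [-a[1], a[0], 0]])
    return np.eye(3) + np.sin(theta) * K + (1 - np.cos(theta)) * (K @ K)

def null_dim(M, tol=1e-9):
    """Dimension of the nullspace, counted via small singular values."""
    return int(np.sum(np.linalg.svd(M, compute_uv=False) < tol))

axis = np.array([0.0, 0.0, 1.0])               # rotation axis a
R = rotation(axis, 0.9)

def H_E(t):
    H = np.eye(4)
    H[:3, :3] = R
    H[:3, 3] = t
    return H

t_perp = np.array([1.0, -2.0, 0.0])            # t perpendicular to a: planar motion
t_skew = np.array([1.0, -2.0, 0.5])            # t has a component along a: screw motion

print(null_dim(H_E(t_perp) - np.eye(4)))       # 2: two independent eigenvectors for eigenvalue 1
print(null_dim(H_E(t_skew) - np.eye(4)))       # 1: a single eigenvector, (a^T, 0)^T, on pi_inf
```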
References
1. Wikipedia, “Sylvester’s law of inertia”. https://en.wikipedia.org/wiki/Sylvester%27s_law_of_inertia
2. Joel Burdick, “Notes on Rotations”, Section 3, Caltech ME115 handouts. http://robotics.caltech.edu/~jwb/courses/ME115/handouts/rotation.pdf