Project vector a onto vector b: scalar projection, vector projection, rejection, angle between vectors, decomposition table, visual breakdown, and orthogonality verification.
Vector projection decomposes a vector a into two orthogonal components: one parallel to a reference vector b (the projection) and one perpendicular to b (the rejection). This decomposition is one of the most widely used operations in linear algebra, physics, and engineering — from resolving forces along inclined planes to computing the closest point on a line.
This calculator computes both the scalar projection (comp_b(a) = a · b / ‖b‖, a signed scalar measuring how far a extends in b's direction) and the vector projection (proj_b(a) = (a · b / b · b) b, the actual vector component along b). The rejection a − proj_b(a) is the leftover perpendicular part. Together, projection + rejection perfectly reconstruct the original vector a.
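The three formulas above translate directly into code. A minimal sketch in plain Python (function names are illustrative, not the calculator's actual implementation):

```python
def dot(u, v):
    """Dot product of two same-length vectors."""
    return sum(x * y for x, y in zip(u, v))

def scalar_projection(a, b):
    """comp_b(a) = a . b / ||b|| -- a signed scalar."""
    return dot(a, b) / dot(b, b) ** 0.5

def vector_projection(a, b):
    """proj_b(a) = (a . b / b . b) b -- the component of a along b."""
    k = dot(a, b) / dot(b, b)
    return [k * x for x in b]

def rejection(a, b):
    """rej_b(a) = a - proj_b(a) -- the component of a perpendicular to b."""
    p = vector_projection(a, b)
    return [x - y for x, y in zip(a, p)]
```

Note that projection plus rejection reconstructs a exactly, and the rejection is always orthogonal to b — both properties make good unit tests.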
The output also includes the angle between a and b, the dot product, and a relationship check (parallel, perpendicular, or general). A verification card confirms that projection and rejection are orthogonal by checking their dot product equals zero — a built-in sanity check.
The decomposition table shows a, b, projection, rejection, and their sum (which should match a) side by side. A stacked bar visualization illustrates how the magnitudes of projection and rejection partition ‖a‖, while component-wise bars break the decomposition down by axis. Six presets cover textbook scenarios including perpendicular vectors (projection = 0), parallel vectors (rejection = 0), and general cases.
Understanding vector projection is essential for least-squares regression, the Gram-Schmidt process, work computation in physics (W = F · d), and graphics (shadow casting, light reflection). This tool makes the geometry of projection tangible through numbers and visuals.
Computing the projection requires a dot product, a second dot product (or magnitude squared), a scalar division, and a vector scaling — four operations where mixing up a and b or miscomputing the denominator yields the wrong result entirely. This calculator computes scalar projection, vector projection, rejection, and the angle between a and b for 2D and 3D vectors. It verifies orthogonality (proj · rej ≈ 0), shows a decomposition bar chart, and includes 6 presets covering textbook, perpendicular, and parallel cases. Essential for understanding work, least squares, and Gram-Schmidt.
comp_b(a) = a · b / ‖b‖; proj_b(a) = (a · b / b · b) b; rej_b(a) = a − proj_b(a); θ = arccos(a · b / (‖a‖‖b‖))
Result: proj_b(a) ≈ (1.662, 2.078, 2.494), rej_b(a) ≈ (−0.662, −0.078, 0.506), θ ≈ 12.93°
For a = (1, 2, 3) and b = (4, 5, 6): a · b = 4 + 10 + 18 = 32. b · b = 16 + 25 + 36 = 77. proj = (32/77)(4, 5, 6) ≈ (1.662, 2.078, 2.494). rej = a − proj ≈ (−0.662, −0.078, 0.506). θ = arccos(32 / (√14 · √77)) ≈ 12.93°.
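The worked example can be verified numerically with a few lines of plain Python (a standalone sketch, not the calculator's code):

```python
import math

a, b = (1.0, 2.0, 3.0), (4.0, 5.0, 6.0)
ab = sum(x * y for x, y in zip(a, b))        # a . b = 32
bb = sum(x * x for x in b)                   # b . b = 77
proj = tuple(ab / bb * x for x in b)         # ~ (1.662, 2.078, 2.494)
rej = tuple(x - p for x, p in zip(a, proj))  # ~ (-0.662, -0.078, 0.506)
norm_a = math.sqrt(sum(x * x for x in a))    # sqrt(14)
theta = math.degrees(math.acos(ab / (norm_a * math.sqrt(bb))))  # ~ 12.93
```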
The **scalar projection** of a onto b is comp_b(a) = a · b / ‖b‖, a signed number representing how far a extends in b’s direction. The **vector projection** is proj_b(a) = (a · b / b · b) b, which is the actual vector component of a along b. The projection is positive when the angle is acute, zero when perpendicular, and negative (pointing opposite to b) when obtuse. Crucially, proj_b(a) ≠ proj_a(b) — projection is **not symmetric**, and swapping a and b produces a vector in a different direction.
The **rejection** rej_b(a) = a − proj_b(a) is the component of a perpendicular to b. Together, projection and rejection produce an **orthogonal decomposition**: a = proj_b(a) + rej_b(a), with proj ⊥ rej. This decomposition is the heart of the **Gram-Schmidt process**, which builds an orthonormal basis by repeatedly subtracting projections onto previously orthogonalized vectors. The Pythagorean theorem applies: ‖a‖² = ‖proj‖² + ‖rej‖², providing a consistency check.
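The "subtract the projection, keep the rejection" step described above is all Gram-Schmidt does, applied repeatedly. A minimal sketch (unnormalized output; helper names are illustrative):

```python
def dot(u, v):
    return sum(x * y for x, y in zip(u, v))

def proj(a, b):
    """Vector projection of a onto b."""
    k = dot(a, b) / dot(b, b)
    return [k * x for x in b]

def gram_schmidt(vectors):
    """Return an orthogonal (not normalized) basis: each new vector is
    the rejection of the input from the span of the previous ones."""
    basis = []
    for v in vectors:
        w = list(v)
        for u in basis:
            p = proj(w, u)
            w = [x - y for x, y in zip(w, p)]   # subtract projection onto u
        if any(abs(x) > 1e-12 for x in w):      # skip linearly dependent inputs
            basis.append(w)
    return basis
```

Dividing each output vector by its norm would turn the orthogonal basis into an orthonormal one.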
In **physics**, work W = F · d = ‖d‖ · comp_d(F) is the displacement magnitude times the scalar projection of force along displacement. In **least squares regression**, the solution â = proj_{col(X)} b projects the observation vector onto the column space of X, minimizing the residual (rejection). In **signal processing**, projecting a signal onto a basis function extracts that frequency component (Fourier analysis). In **robotics**, projecting a desired trajectory onto the feasible space produces the closest achievable motion. Understanding projection and rejection is how mathematicians and engineers decompose complex problems into simpler, orthogonal parts.
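The least-squares connection is easiest to see with a single regressor, where fitting y ≈ βx reduces to projecting y onto x. A small sketch with made-up data (x, y, and β below are illustrative):

```python
# One-regressor least squares: beta = (x . y) / (x . x), i.e. the projection
# coefficient of y onto x. The fitted values are proj_x(y); the residual
# is the rejection, orthogonal to x (the normal-equation condition).
x = [1.0, 2.0, 3.0, 4.0]
y = [2.1, 3.9, 6.2, 8.1]          # hypothetical observations

beta = sum(a * b for a, b in zip(x, y)) / sum(a * a for a in x)
fitted = [beta * a for a in x]                       # closest point in span{x}
residual = [b - f for b, f in zip(y, fitted)]        # the rejection
x_dot_residual = sum(a * r for a, r in zip(x, residual))  # ~ 0
```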
The scalar projection is a single number measuring how far a extends in b's direction (positive or negative). The vector projection is the actual vector component of a along b — it has both magnitude and direction (along b).
The rejection is a − proj_b(a), the part of a perpendicular to b. Projection and rejection together reconstruct a: a = proj_b(a) + rej_b(a). The rejection is always orthogonal to b.
The projection is zero when a is perpendicular to b (a · b = 0). In this case, a has no component along b's direction, and the rejection equals a itself.
Yes! Scaling b by any nonzero factor doesn't change the vector projection: in proj_b(a) = (a · b / b · b) b, the scale factor cancels, so only b's direction matters, not its length. (The scalar projection comp_b(a) keeps its magnitude but flips sign if b is reversed.)
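The cancellation is easy to confirm numerically (a throwaway sketch):

```python
def proj(a, b):
    """Vector projection of a onto b."""
    k = sum(x * y for x, y in zip(a, b)) / sum(x * x for x in b)
    return [k * x for x in b]

a, b = [1.0, 2.0, 3.0], [4.0, 5.0, 6.0]
p1 = proj(a, b)
p2 = proj(a, [10 * x for x in b])    # scale b by 10
p3 = proj(a, [-x for x in b])        # even reversing b leaves proj unchanged
```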
Gram-Schmidt orthogonalizes vectors by iteratively subtracting projections. To make v₂ orthogonal to u₁, compute v₂ − proj_{u₁}(v₂) — this is exactly the rejection of v₂ with respect to u₁.
A negative scalar projection means the angle between a and b is obtuse (> 90°). The vector projection then points in the opposite direction to b, meaning a's component along b opposes b's direction.
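A concrete obtuse-angle case (chosen here for illustration): with a = (−1, 0) and b = (1, 1), the angle is 135°, so the scalar projection is negative and the vector projection points opposite to b.

```python
import math

a, b = (-1.0, 0.0), (1.0, 1.0)
ab = sum(x * y for x, y in zip(a, b))         # -1 < 0: obtuse angle
comp = ab / math.hypot(*b)                    # -1/sqrt(2), approx -0.707
proj = tuple(ab / sum(x * x for x in b) * x for x in b)   # (-0.5, -0.5)
```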