How to know if a vector is in a subspace spanned by a set of vectors?
I have $v_1=(1,0,-1),\; v_2=(2,1,3),\; v_3=(4,2,6)\;\text{ and }\; w=(3,1,2),$
where $v_1,v_2,v_3$ and $w$ are all column vectors.
I want to know if $w$ is in the subspace spanned by $(v_1,v_2,v_3).$
I write down:
$$\begin{aligned}x_1+2x_2+4x_3&=3\\x_2+2x_3&=1\\-x_1+3x_2+6x_3&=2\end{aligned}$$
And I augment it to get
$$
\begin{pmatrix}
1 & 2 & 4 & 3 \\
0 & 1 & 2 & 1 \\
-1 & 3 & 6 & 2
\end{pmatrix}
$$
Now, from what the professor told us, I should row reduce it to get
$$
\begin{pmatrix}
1 & 2 & 4 & 3 \\
0 & 1 & 2 & 1 \\
0 & 0 & 0 & 0
\end{pmatrix}
$$
And this is my first question: why should I row reduce it only until the last row is all $0$'s?
Moreover, the book states:
Since the dimension of the space of the columns of the augmented matrix coincides with the dimension of the space of the columns of the matrix of the coefficients, the system admits a non-trivial solution, and $w\in \operatorname{Span}\{v_1, v_2, v_3\}.$
Since I have a form of dyscalculia, this statement is unclear to me. Could somebody please explain in simple words what it means?
What do they mean when they say that the dimension of the space of the columns of the augmented matrix coincides with the dimension of the space of the columns of the matrix of the coefficients? Why does this imply that the system admits a non-trivial solution? What is the non-trivial solution here?
If you could help me, please explain it in the simplest way possible... Thanks for the help, guys! You are great.
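For reference, here is a minimal sketch of the rank comparison the book refers to, assuming Python with sympy is available (purely illustrative, not something taken from the book or the course):

```python
# Compare the rank of the coefficient matrix with the rank of the augmented matrix.
from sympy import Matrix

v1, v2, v3, w = Matrix([1, 0, -1]), Matrix([2, 1, 3]), Matrix([4, 2, 6]), Matrix([3, 1, 2])

A   = Matrix.hstack(v1, v2, v3)   # coefficient matrix: columns are v1, v2, v3
aug = Matrix.hstack(A, w)         # augmented matrix [A | w]

print(A.rank(), aug.rank())       # prints: 2 2  -> the ranks coincide, so the system is solvable
print(aug.rref()[0])              # reduced row echelon form; its last row is all zeros
```

Both ranks come out as $2$: appending the column $w$ does not enlarge the column space of the coefficient matrix, which is exactly the condition the book invokes, and it corresponds to the last row of the reduced augmented matrix being $(0\;0\;0\mid 0)$ rather than $(0\;0\;0\mid c)$ with $c\neq 0$.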
linear-algebra matrices matrix-equations
Build your matrix by rows, i.e. take the transpose of your matrix, and now reduce it by rows. Then $w\in\operatorname{Span}\{v_1,v_2,v_3\}$ iff the last row becomes all zeros. In this particular case, since $v_1,v_2,v_3$ are not linearly independent, they do not form a basis of $\Bbb R^3$, and it is thus not guaranteed that $w$ is a linear combination of them...
– DonAntonio
Dec 2 '18 at 21:27
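A minimal sketch of this row-based check, assuming Python with sympy (phrased as a rank comparison, which is equivalent to asking whether the row contributed by $w$ reduces to zero):

```python
# Row-based check: does appending w as an extra row increase the rank?
from sympy import Matrix

V = Matrix([[1, 0, -1],
            [2, 1, 3],
            [4, 2, 6]])                  # rows are v1, v2, v3
Vw = V.col_join(Matrix([[3, 1, 2]]))     # append w as a fourth row

# w lies in Span{v1, v2, v3} exactly when the rank does not go up.
print(V.rank(), Vw.rank())               # prints: 2 2  -> w is in the span
```

The rank stays at $2$, so $w$ is in the span even though $v_1,v_2,v_3$ themselves are dependent ($v_3 = 2v_2$).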
Is it necessary to take the transpose? Can't I just row reduce it? How do I know when I should stop row reducing?
– BM97
Dec 2 '18 at 21:31
What does it mean that the dimension of the space of the columns of the augmented matrix coincides with the dimension of the space of the columns of the matrix of the coefficients?
– BM97
Dec 2 '18 at 21:33
The reduction ends when your matrix reaches an echelon form, i.e. when it is "as diagonal" as possible. You achieved this when the last row became all zeros (this is not always the case, though). Observe that one times the first column plus one times the second column plus zero times the third equals the fourth column. But IMO it is easier to do this with the vectors as rows, not columns, and to reduce by rows: if you do this, the last two rows become all zeros, which means the two vectors represented by those rows are linear combinations of the first two.
– DonAntonio
Dec 2 '18 at 23:02
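To spell out that last observation, back-substitution in the reduced system from the question gives the solution explicitly:
$$
\begin{aligned}
x_2 + 2x_3 &= 1 &&\Longrightarrow\quad x_2 = 1 - 2x_3,\\
x_1 + 2x_2 + 4x_3 &= 3 &&\Longrightarrow\quad x_1 = 3 - 2(1 - 2x_3) - 4x_3 = 1.
\end{aligned}
$$
So $x_3$ is free; taking $x_3 = 0$ gives $x_1 = x_2 = 1$, i.e. $w = v_1 + v_2$, a concrete witness that $w\in\operatorname{Span}\{v_1,v_2,v_3\}$.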