Vector space of functions (an example for understanding the concept)
My textbook states:

Let $G$ be a finite-dimensional vector space of real functions in $\mathbb{R}^D$.

What is meant by "vector space of real functions"?

I know what a vector space is, but I don't get how real functions can form a vector space. (The only vector spaces I can see in connection with a function are the vector spaces of its domain and codomain.)

Please, if you can, provide a tangible and intuitive example with an explanation, as I find examples extremely useful for understanding.

functions vector-spaces

asked Dec 20 at 16:20 by Tommaso Bendinelli
Do you know the axioms for a vector space? For example, do you understand why the set of polynomials with coefficients from $\mathbb{R}$ is a vector space over $\mathbb{R}$?
– John Douma
Dec 20 at 16:36
You just need to define addition of two functions and multiplication of a function by a real number. What could that look like, do you think?
– fkraiem
Dec 20 at 16:37
Don't you mean infinite dimensional?
– Math_QED
Dec 20 at 16:45
No, it's finite dimensional.
– Tommaso Bendinelli
Dec 20 at 17:40
5 Answers
$\mathbb{R}^D$ is the set of all functions $f\colon D \to \mathbb{R}$. If we define an addition $f+g$ and a scalar multiplication $\alpha f$ on this set by

$(f+g)(x)=f(x)+g(x)$ and $(\alpha f)(x)=\alpha f(x)$,

then $\mathbb{R}^D$ is a real vector space (of functions).

answered Dec 20 at 16:37 by Fred
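To see these pointwise definitions in action, here is a minimal editor-added Python sketch (not part of the original answer; the helper names `add` and `scale` are made up for the illustration):

```python
# Treat functions as "vectors": addition and scalar multiplication are defined
# pointwise, exactly as in the answer above.
import math

def add(f, g):
    """Pointwise sum: (f + g)(x) = f(x) + g(x)."""
    return lambda x: f(x) + g(x)

def scale(alpha, f):
    """Pointwise scalar multiple: (alpha * f)(x) = alpha * f(x)."""
    return lambda x: alpha * f(x)

f = math.sin
g = math.exp
h = add(scale(2.0, f), g)   # the "vector" 2*sin + exp

x = 0.5
assert abs(h(x) - (2.0 * math.sin(x) + math.exp(x))) < 1e-12
```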
Take a collection of functions and see if you can demonstrate all the properties of a vector space using them.

A vector space requires:

1. An additive identity (written $0$ in $\mathbb{R}$). The function $f\equiv 0$ fulfills this need.
2. A scalar multiplicative identity (written $1$ in $\mathbb{R}$). $1$ works here since $1\cdot f = f$.
3. Commutativity of addition: $f+g = g+f$.
4. Associativity of addition: $f+(g+h) = (f+g)+h$.
5. Associativity of scalar multiplication: $\alpha(\beta f) = (\alpha\beta)f$.
6. Distributivity of scalars: $(\alpha + \beta)f = \alpha f + \beta f$.
7. Distributivity of scalars over vector addition: $\alpha(f+g) = \alpha f + \alpha g$.
8. An additive inverse: given $f$ there exists $g$ such that $f+g = 0$. Obviously $g=-f$ satisfies this.

answered Dec 20 at 16:43 by postmortes
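As an editor-added aside, here is a small numerical spot-check of a few of the axioms above using the pointwise operations; the helper names are illustrative only and do not come from the answer:

```python
# Spot-check commutativity, additive identity, additive inverse and
# distributivity of scalars at random sample points.
import math
import random

def add(f, g):          # (f + g)(x) = f(x) + g(x)
    return lambda x: f(x) + g(x)

def scale(a, f):        # (a * f)(x) = a * f(x)
    return lambda x: a * f(x)

f, g = math.sin, math.cos
zero = lambda x: 0.0            # additive identity (the zero function)
minus_f = scale(-1.0, f)        # additive inverse of f

for _ in range(100):
    x = random.uniform(-5.0, 5.0)
    a, b = random.random(), random.random()
    assert abs(add(f, g)(x) - add(g, f)(x)) < 1e-12            # axiom 3
    assert abs(add(f, zero)(x) - f(x)) < 1e-12                 # axiom 1
    assert abs(add(f, minus_f)(x)) < 1e-12                     # axiom 8
    assert abs(scale(a + b, f)(x)
               - add(scale(a, f), scale(b, f))(x)) < 1e-12     # axiom 6
```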
Can we define the addition of two functions as $(f + g)(x) = f(x) + g(x)$ for any given functions?
– Tommaso Bendinelli
Dec 20 at 17:37
Yes, exactly so
– postmortes
Dec 20 at 17:53
So, for instance, let's take the set of functions with domain equal to the interval $(0,5)$ and codomain $(0,5)$. This set contains, for instance, $f(x) = x$, $f(x) = 2x$, $f(x) = \exp(x)$, $f(x)=\cos(x)$ (all restricted to that domain), as all these functions map points of the domain into the codomain. This is a vector space.
– Tommaso Bendinelli
Dec 20 at 18:00
They're all part of a vector space, yes. You've selected functions that are all continuous on the domain $(0,5)$, so the full vector space could be $C((0,5))$, which is all functions continuous on that domain. All functions that can be reached using the ones you have and the axioms 1–8 in the answer are part of the vector space (so $\alpha \cdot \exp(x)$ is in the vector space for all real $\alpha$).
– postmortes
Dec 20 at 19:13
You should not try to "visualize" a single vector at all costs; have you ever tried this for a five-dimensional one? We can't "visualise" such high-dimensional vectors, but we still want to talk about concepts like parallelism, planes and projections in such vector spaces. You can't "visualise" the vector $(1,2,3,4,5)$, but you may say that it's parallel to $(2,4,6,8,10)$ and that its projection onto the (not visualisable) plane spanned by $(1,0,0,0,0)$ and $(0,1,0,0,0)$ is $(1,2,0,0,0)$. And that transfer can be done with sets of functions.

Take, for example, $\mathbb{R}^J$ (where $J$ is a non-empty set), the set of real-valued functions defined on $J$. We want to consider each member of $\mathbb{R}^J$, that is, each function, as a single vector.

First we recall that two functions $f$ and $g$ defined on the same domain are defined to be equal iff they are pointwise equal, that is, $f=g$ iff $f(x)=g(x)$ for all $x$ in the common domain.

From here we may define the sum of two functions $f$ and $g$, which is a function of its own, pointwise:

Define for $f,g\in \mathbb{R}^J$ their sum $f+g$ by $(f+g)(x):=f(x)+g(x)$ for all $x\in J$. Furthermore, for any real number $c$ we may define the new function $c\cdot f$ by $(c\cdot f)(x):=c\cdot f(x)$.

It's easy to verify that $\mathbb{R}^J$ is now a real vector space. (It may be infinite-dimensional, but that doesn't matter here.) For example, one has to verify that

$$c\cdot (f+g)=c\cdot f+c\cdot g.$$

But that's nearly trivial, since by the above definitions

$$\begin{align}\bigl(\mathbf{c\cdot(f+g)}\bigr)(x)&=c\cdot\bigl((f+g)(x)\bigr)\\
&=c\cdot\bigl(f(x)+g(x)\bigr)\\
&=c\cdot f(x)+c\cdot g(x)\\
&=(c\cdot f)(x)+(c\cdot g)(x)\\
&=(\mathbf{c\cdot f+c\cdot g})(x).\end{align}$$

To give an example, recall that for any non-zero vector $f$ of a vector space $V$ the set $g=\{c\cdot f \mid c\in \mathbb{R}\}$ is a straight line through the origin. Now let $J=\mathbb{R}$, hence $V=\mathbb{R}^{\mathbb{R}}$, and let $f$ be the well-known function defined by $f(t)=t^2$.

From this point of view the set $g=\{c\cdot f \mid c\in \mathbb{R}\}$ is a straight line in $V$ through the origin. Any point $p$ of $g$ is a function defined by $p(t)=c\cdot t^2$ for some $c$.

By the way, the "usual" vector space $\mathbb{R}^n=\mathbb{R}^{\{1,\dots,n\}}$ is nothing other than the set of functions $\vec v\colon\{1,\dots,n\}\to\mathbb{R}$; can you see this? Such a $\vec v$ is determined by the values it takes on $1,\dots,n$, that is, by $\vec v(1),\dots,\vec v(n)$; commonly one writes $v_k$ instead of $\vec v(k)$ for $1\leq k\leq n$. And the notation

$$\vec v=\begin{pmatrix}v_1\\ \vdots\\ v_n\end{pmatrix}$$

is nothing but an abbreviated form of the table of values that $\vec v$ takes on $\{1,\dots, n\}$.

Now take another function $\vec w$ from $\mathbb{R}^n$. From the above definitions we may compute $\vec v+\vec w$, namely by $(\vec v+\vec w)(k)=\vec v(k)+\vec w(k)$. Abbreviated, this boils down to

$$\vec v+\vec w=\begin{pmatrix}v_1+w_1\\ \vdots\\ v_n+w_n\end{pmatrix}.$$

answered Dec 20 at 16:49 by Michael Hoppe (edited Dec 20 at 19:47)
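An editor-added Python sketch of the closing remark (the dictionary representation is just one possible illustration): a vector in $\mathbb{R}^3$ viewed as a function on $\{1,2,3\}$, where pointwise addition of the functions reproduces componentwise addition of the columns.

```python
# A vector in R^3 as a function {1, 2, 3} -> R, stored as a dictionary.
v = {1: 1.0, 2: 2.0, 3: 3.0}      # v(1)=1, v(2)=2, v(3)=3, i.e. the column (1, 2, 3)
w = {1: 0.5, 2: -1.0, 3: 4.0}

def add(v, w):
    """Pointwise sum of two functions on the index set {1, ..., n}."""
    return {k: v[k] + w[k] for k in v}

print(add(v, w))   # {1: 1.5, 2: 1.0, 3: 7.0} -- the column (1.5, 1.0, 7.0)
```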
Thank you everyone for your answers, I really appreciate your willingness to help me. There is only one thing that I don't get: is a vector space of functions composed of functions (and thus not visualisable) or of vectors?
– Tommaso Bendinelli
Dec 20 at 17:20
@TommasoBendinelli First of all, a function is a vector (and vice versa). Thus you may transfer the notion of parallelism, e.g., to functions as well: so $f$ and $3\cdot f$ are parallel functions (vectors).
– Michael Hoppe
Dec 20 at 17:27
I can't visualize it; I have always thought of a function as a mapping between the domain and the codomain, while I see a vector as (forgive the lack of formalism) an "arrow" in an $n$-dimensional space. How can these two things be the same?
– Tommaso Bendinelli
Dec 20 at 17:35
@TommasoBendinelli I was also puzzled by it when I first saw it. But in more abstract mathematics, a vector simply means an element of a vector space. Since we have seen that the set of all real-valued functions forms a vector space, a real-valued function is a vector. Conversely, an old-school vector $\begin{pmatrix} v_0 \\ v_1\end{pmatrix}$ living in the plane can be seen as a real-valued function $v\colon \{0, 1\} \to \mathbb{R}$ where $v(0) = v_0$ and $v(1) = v_1$. Hope this helps you to visualize it.
– Alex Vong
Dec 20 at 19:21
Thank you! If I let $A$ be a vector space of exponential functions $e^{ax}$ with $a$ constrained to be either 1 or 2, this vector space has dimension 2, right? And if I remove the constraint on $a$, then the vector space has infinite dimension. Is that correct?
– Tommaso Bendinelli
Dec 20 at 19:35
I'll offer a point of view that gives some concrete examples.

As people have mentioned, the only thing necessary to have a "vector space" is the ability to add objects together and to multiply them by scalars (subject to some special rules). We have this for functions that share a common domain and codomain. If we consider all functions with codomain $\mathbb{R}$ and a fixed domain $D$, we get an infinite-dimensional vector space (unless $D$ is a finite set). This is what is usually called $\mathbb{R}^{D}$.

Now, if we want a finite-dimensional vector space, what we are looking for is a subspace of $\mathbb{R}^{D}$ that can be spanned by a finite set of functions. Here span is the usual linear-algebra concept, where we are allowed to take linear combinations of the functions; e.g. the span of functions $f(x)$ and $g(x)$ is $\{ af(x) + bg(x) : a,b \in \mathbb{R}\}$.

Some examples:

If we take the set of constant functions $f(x) = c$ for $c \in \mathbb{R}$, this is a 1-dimensional vector space of functions, because any such function is just $c$ times the function $f(x) = 1$.

If we take the set of polynomials of degree at most $n$, we get a vector space of dimension $n+1$; for example the polynomials of degree at most 4 give a 5-dimensional vector space with basis $\{1,x,x^{2}, x^{3}, x^{4}\}$.

If we take linear combinations of $\sin{x}$ and $\cos{x}$, we get a vector space of dimension 2 consisting of the functions $a\sin{x}+b\cos{x}$ with $a,b \in \mathbb{R}$ (it can be shown that $\sin{x}$ and $\cos{x}$ are not scalar multiples of each other, so they are linearly independent).

answered Dec 20 at 20:04 by Morgan Rodgers (edited Dec 20 at 21:39)
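A short editor-added Python sketch of the last example (the helper `element` is an illustrative name, not from the answer): each member of $\mathrm{span}\{\sin, \cos\}$ is determined by its coordinate pair $(a,b)$, adding functions adds coordinates, and a nonzero determinant of sample values shows $\sin$ and $\cos$ are linearly independent.

```python
import math

def element(a, b):
    """The function a*sin(x) + b*cos(x), an element of span{sin, cos}."""
    return lambda x: a * math.sin(x) + b * math.cos(x)

p = element(1.0, 2.0)
q = element(3.0, -1.0)
r = element(1.0 + 3.0, 2.0 + (-1.0))        # coordinates add componentwise

x = 0.7
assert abs((p(x) + q(x)) - r(x)) < 1e-12    # adding functions = adding coordinates

# Linear independence of sin and cos: the 2x2 matrix of their values at two
# sample points has nonzero determinant, so neither is a multiple of the other.
x1, x2 = 0.0, math.pi / 2
det = math.sin(x1) * math.cos(x2) - math.sin(x2) * math.cos(x1)
print(det)   # -1.0, nonzero
```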
I've deleted my answer—I think it would cause confusion. The question I was trying to answer was "How can a set of functions form a space anyway?" which I felt other answers were jumping past.
– timtfj
Dec 20 at 21:42
The notion of a vector space is abstract and it can be applied to functions (the resulting spaces are often called function spaces); such spaces are the subject of functional analysis.

It is important to get away from the purely geometric picture of a vector space: arrows are a special case and impossible to picture once we move into higher-dimensional spaces. Instead, consider the definition of a vector space:

Let $V$ be a set which is closed under vector addition and under multiplication of vectors by scalars (and which satisfies the usual vector-space axioms); then we call $V$ a vector space. In particular,

$$ \forall x, y \in V \quad \forall c,d \in \mathbb{R}: \qquad cx+dy \in V. $$

When we talk about function spaces we are talking about mappings from a set $X$ to a vector space $V$ (over a field, but don't get too bogged down in this if you don't know what that means); note that $X$ can also be a vector space, in which case we can consider the linear mappings among these functions.

A simpler way of thinking about function spaces is as collections of functions that share characteristics of their ranges and whose codomains are the same.

Example time!

The function space $C(\mathbb{R}^n)$ consists of all functions that are continuous on $\mathbb{R}^n$; e.g. $f(x)=x \in C(\mathbb{R})$.

Hilbert spaces, often referred to as generalisations of Euclidean space, are another important class of such spaces.
5 Answers
5
active
oldest
votes
5 Answers
5
active
oldest
votes
active
oldest
votes
active
oldest
votes
$mathbb R^D$ is the set of all functions $f:D to mathbb R.$ If we define an addition $f+g$ and a scalar multiplication $ alpha f$ in this set by
$(f+g)(x)=f(x)+g(x)$ and $( alpha f)(x)= alpha f(x)$,
then $mathbb R^D$ is a real vector space ( of functions).
add a comment |
$mathbb R^D$ is the set of all functions $f:D to mathbb R.$ If we define an addition $f+g$ and a scalar multiplication $ alpha f$ in this set by
$(f+g)(x)=f(x)+g(x)$ and $( alpha f)(x)= alpha f(x)$,
then $mathbb R^D$ is a real vector space ( of functions).
add a comment |
$mathbb R^D$ is the set of all functions $f:D to mathbb R.$ If we define an addition $f+g$ and a scalar multiplication $ alpha f$ in this set by
$(f+g)(x)=f(x)+g(x)$ and $( alpha f)(x)= alpha f(x)$,
then $mathbb R^D$ is a real vector space ( of functions).
$mathbb R^D$ is the set of all functions $f:D to mathbb R.$ If we define an addition $f+g$ and a scalar multiplication $ alpha f$ in this set by
$(f+g)(x)=f(x)+g(x)$ and $( alpha f)(x)= alpha f(x)$,
then $mathbb R^D$ is a real vector space ( of functions).
answered Dec 20 at 16:37
Fred
44.2k1645
44.2k1645
add a comment |
add a comment |
Take a collection of functions and see if you can demonstrate all the properties of a vector space using them.
A vector space requires:
- An additive identity (written $0$ in ${mathbb R}$). The function $fequiv 0$ fulfills this need.
- A scalar multiplicative identity (written $1$ in ${mathbb R}$). $1$ works here since $1cdot f = f$
- Commutativity of addition: $f+g = g+f$
- Associativity of addition: $f+(g+h) = (f+g)+h$
- Associativity of scalar multiplication: $alpha (beta f) = (alpha beta)f$
- Distributivity of scalars: $(alpha + beta)f = alpha f + beta f$
- Distributivity of scalars over vector addition: $alpha(f+g) = alpha f + alpha g$
- An additive inverse: given $f$ there exists $g$ such that $f+g = 0$. Obviously $g=-f$ satisfies this.
Can we define the addition of two functions (f + g)(x) as f(x) + g(x) for any given function?
– Tommaso Bendinelli
Dec 20 at 17:37
Yes, exactly so
– postmortes
Dec 20 at 17:53
So, for instance, let's take the set of function with the domain equal to the interval (0,5) to the codomain (0,5). So, for instance, in this set of a function belongs f(x) = x, f(x) = 2x, f(x) =exp(x) f(x)=cos(x) (all defined for the domain), as all these functions maps at least one point between domain and codomain. This is a vector space.
– Tommaso Bendinelli
Dec 20 at 18:00
1
They're all part of a vector space, yes. You've selected functions that are all continuous on the domain $(0,5)$, so the full vector space could be $C((0,5))$ which is all functions continuous on that domain. All functions that can be reached used the ones you have and the axioms 1-8 in the answer are part of the vector space (so $alpha cdot (exp x)$ is in the vector space for all real $alpha$.
– postmortes
Dec 20 at 19:13
add a comment |
Take a collection of functions and see if you can demonstrate all the properties of a vector space using them.
A vector space requires:
- An additive identity (written $0$ in ${mathbb R}$). The function $fequiv 0$ fulfills this need.
- A scalar multiplicative identity (written $1$ in ${mathbb R}$). $1$ works here since $1cdot f = f$
- Commutativity of addition: $f+g = g+f$
- Associativity of addition: $f+(g+h) = (f+g)+h$
- Associativity of scalar multiplication: $alpha (beta f) = (alpha beta)f$
- Distributivity of scalars: $(alpha + beta)f = alpha f + beta f$
- Distributivity of scalars over vector addition: $alpha(f+g) = alpha f + alpha g$
- An additive inverse: given $f$ there exists $g$ such that $f+g = 0$. Obviously $g=-f$ satisfies this.
Can we define the addition of two functions (f + g)(x) as f(x) + g(x) for any given function?
– Tommaso Bendinelli
Dec 20 at 17:37
Yes, exactly so
– postmortes
Dec 20 at 17:53
So, for instance, let's take the set of function with the domain equal to the interval (0,5) to the codomain (0,5). So, for instance, in this set of a function belongs f(x) = x, f(x) = 2x, f(x) =exp(x) f(x)=cos(x) (all defined for the domain), as all these functions maps at least one point between domain and codomain. This is a vector space.
– Tommaso Bendinelli
Dec 20 at 18:00
1
They're all part of a vector space, yes. You've selected functions that are all continuous on the domain $(0,5)$, so the full vector space could be $C((0,5))$ which is all functions continuous on that domain. All functions that can be reached used the ones you have and the axioms 1-8 in the answer are part of the vector space (so $alpha cdot (exp x)$ is in the vector space for all real $alpha$.
– postmortes
Dec 20 at 19:13
add a comment |
Take a collection of functions and see if you can demonstrate all the properties of a vector space using them.
A vector space requires:
- An additive identity (written $0$ in ${mathbb R}$). The function $fequiv 0$ fulfills this need.
- A scalar multiplicative identity (written $1$ in ${mathbb R}$). $1$ works here since $1cdot f = f$
- Commutativity of addition: $f+g = g+f$
- Associativity of addition: $f+(g+h) = (f+g)+h$
- Associativity of scalar multiplication: $alpha (beta f) = (alpha beta)f$
- Distributivity of scalars: $(alpha + beta)f = alpha f + beta f$
- Distributivity of scalars over vector addition: $alpha(f+g) = alpha f + alpha g$
- An additive inverse: given $f$ there exists $g$ such that $f+g = 0$. Obviously $g=-f$ satisfies this.
Take a collection of functions and see if you can demonstrate all the properties of a vector space using them.
A vector space requires:
- An additive identity (written $0$ in ${mathbb R}$). The function $fequiv 0$ fulfills this need.
- A scalar multiplicative identity (written $1$ in ${mathbb R}$). $1$ works here since $1cdot f = f$
- Commutativity of addition: $f+g = g+f$
- Associativity of addition: $f+(g+h) = (f+g)+h$
- Associativity of scalar multiplication: $alpha (beta f) = (alpha beta)f$
- Distributivity of scalars: $(alpha + beta)f = alpha f + beta f$
- Distributivity of scalars over vector addition: $alpha(f+g) = alpha f + alpha g$
- An additive inverse: given $f$ there exists $g$ such that $f+g = 0$. Obviously $g=-f$ satisfies this.
answered Dec 20 at 16:43
postmortes
1,78711016
1,78711016
Can we define the addition of two functions (f + g)(x) as f(x) + g(x) for any given function?
– Tommaso Bendinelli
Dec 20 at 17:37
Yes, exactly so
– postmortes
Dec 20 at 17:53
So, for instance, let's take the set of function with the domain equal to the interval (0,5) to the codomain (0,5). So, for instance, in this set of a function belongs f(x) = x, f(x) = 2x, f(x) =exp(x) f(x)=cos(x) (all defined for the domain), as all these functions maps at least one point between domain and codomain. This is a vector space.
– Tommaso Bendinelli
Dec 20 at 18:00
1
They're all part of a vector space, yes. You've selected functions that are all continuous on the domain $(0,5)$, so the full vector space could be $C((0,5))$ which is all functions continuous on that domain. All functions that can be reached used the ones you have and the axioms 1-8 in the answer are part of the vector space (so $alpha cdot (exp x)$ is in the vector space for all real $alpha$.
– postmortes
Dec 20 at 19:13
add a comment |
Can we define the addition of two functions (f + g)(x) as f(x) + g(x) for any given function?
– Tommaso Bendinelli
Dec 20 at 17:37
Yes, exactly so
– postmortes
Dec 20 at 17:53
So, for instance, let's take the set of function with the domain equal to the interval (0,5) to the codomain (0,5). So, for instance, in this set of a function belongs f(x) = x, f(x) = 2x, f(x) =exp(x) f(x)=cos(x) (all defined for the domain), as all these functions maps at least one point between domain and codomain. This is a vector space.
– Tommaso Bendinelli
Dec 20 at 18:00
1
They're all part of a vector space, yes. You've selected functions that are all continuous on the domain $(0,5)$, so the full vector space could be $C((0,5))$ which is all functions continuous on that domain. All functions that can be reached used the ones you have and the axioms 1-8 in the answer are part of the vector space (so $alpha cdot (exp x)$ is in the vector space for all real $alpha$.
– postmortes
Dec 20 at 19:13
Can we define the addition of two functions (f + g)(x) as f(x) + g(x) for any given function?
– Tommaso Bendinelli
Dec 20 at 17:37
Can we define the addition of two functions (f + g)(x) as f(x) + g(x) for any given function?
– Tommaso Bendinelli
Dec 20 at 17:37
Yes, exactly so
– postmortes
Dec 20 at 17:53
Yes, exactly so
– postmortes
Dec 20 at 17:53
So, for instance, let's take the set of function with the domain equal to the interval (0,5) to the codomain (0,5). So, for instance, in this set of a function belongs f(x) = x, f(x) = 2x, f(x) =exp(x) f(x)=cos(x) (all defined for the domain), as all these functions maps at least one point between domain and codomain. This is a vector space.
– Tommaso Bendinelli
Dec 20 at 18:00
So, for instance, let's take the set of function with the domain equal to the interval (0,5) to the codomain (0,5). So, for instance, in this set of a function belongs f(x) = x, f(x) = 2x, f(x) =exp(x) f(x)=cos(x) (all defined for the domain), as all these functions maps at least one point between domain and codomain. This is a vector space.
– Tommaso Bendinelli
Dec 20 at 18:00
1
1
They're all part of a vector space, yes. You've selected functions that are all continuous on the domain $(0,5)$, so the full vector space could be $C((0,5))$ which is all functions continuous on that domain. All functions that can be reached used the ones you have and the axioms 1-8 in the answer are part of the vector space (so $alpha cdot (exp x)$ is in the vector space for all real $alpha$.
– postmortes
Dec 20 at 19:13
They're all part of a vector space, yes. You've selected functions that are all continuous on the domain $(0,5)$, so the full vector space could be $C((0,5))$ which is all functions continuous on that domain. All functions that can be reached used the ones you have and the axioms 1-8 in the answer are part of the vector space (so $alpha cdot (exp x)$ is in the vector space for all real $alpha$.
– postmortes
Dec 20 at 19:13
add a comment |
You should not try to "visualize" a single vector as whatever by all means, tried this ever for a five-dimensional one? We can't "visualise" such high-dimensional vectors, but we want to talk of concepts of parallelism or planes or projections (in such vector spaces). You can't "visualise" the vector $(1,2,3,4,5)$, but you may say that it's parallel to $(2,4,6,8,10)$ and that it's projection on the (not visualisable) plane spanned by $(1,0,0,0,0)$ and $(0,1,0,0,0)$ is $(1,2,0,0,0)$. And that transfer can be done with sets of functions.
Take, for example, $mathbb R^J$ (where $J$ is a non-empty set), the set of real-valued functions defined on $J$. We want consider each member, that is: each function, of $mathbb R^J$ as a single vector.
First we recall that two functions $f$ and $g$, defined on the same domain, are defined equal, iff they are pointwise equal, that is, $f=g$ iff for all $x$ from the common domain we have $f(x)=g(x)$.
From here we may define the sum of two functions $f$ and $g$, which is a function of its own, pointwise:
Define for $f,gin mathbb R^J$ their sum $f+g$ by $(f+g)(x):=f(x)+g(x)$ for all $xin J$. Furthermore we may define for any real number $c$ the new function $ccdot f$ by $(ccdot f)(x):=ccdot f(x)$.
It's easy to verify that now $mathbb R^J$ is a real vector space. (It may be infinite-dimensional, but that doesn't matter in this case.) For example, one has to verify that
$$ccdot (f+g)=ccdot f+ccdot g.$$
But that's nearly trivial since by the above definitions
$$begin{align}bigl({bf ccdot(f+g)}bigr)(x)&=ccdotbigl((f+g)(x)bigr)\
&=ccdotbigl(f(x)+g(x)bigr)\
&=ccdot f(x)+ccdot g(x)\
&=(ccdot f)(x)+(ccdot g)(x)\
&=({bf ccdot f+ccdot g})(x).end{align}
$$
To give an example, recall that for any non-zero vector $f$ of a vector space $V$ the set $g={ccdot f|cin mathbb R}$ is a straight line through the origin. Now let $J=mathbb R$, hence $V=mathbb R^{mathbb R}$ and let $f$ be a well known function defined by $f(t)=t^2$.
From this point of view the set $g={ccdot f|cin mathbb R}$ is a straight line in $V$ through the origin. Any point $p$ of $g$ is a function $p$ which is defined by $p(t)=ccdot t^2$.
By the way, the "usual" vector space $mathbb R^n=mathbb R^{{1,dots,n}}$ is nothing else as the set of functions $vec vcolon{1,dots,n}tomathbb R$, can you see this? Such a $vec v$ is determined by the values it takes for $1,dots,n$, that is by $vec v(1).dots,vec v(n)$; commonly one writes $v_k$ instead of $vec v(k)$ for $1leq kleq n$. And the notation
$$vec v=begin{pmatrix}v_1\
vdots\
v_nend{pmatrix}$$
is nothing else but an abbreviate form of the table of values that $vec v$ takes on ${1,dots, n}$.
Now take another function $vec w$ from $mathbb R^n$. From the above definitions we may compute $vec v+vec w$, namely by $(vec v+vec w)(k)=vec v(k)+vec w(k)$. Now this boils down, abbreviated, to
$$vec v+vec w=begin{pmatrix}v_1+w_1\
vdots\
v_n+w_nend{pmatrix}.$$
Thank you everyone for your answers. I really appreciate the your willingness to help me. There is only one thing that I don't get, a vector space of a functions is composed by functions (and thus cannot be visualised) or by vectors?
– Tommaso Bendinelli
Dec 20 at 17:20
@TommasoBendinelli First of all, a function is a vector (and vice versa). Thus you may transfer the notion of parallelness. e.g., to functions as well: so $f$ and $3cdot f$ are parallel functions (vectors).
– Michael Hoppe
Dec 20 at 17:27
I can't visualize it, I have always have thought a function as a mapping between the domain and the codomain. While I see a vector as (forgive but lack of formalism) an "arrow" in an n-dimensional space. How can these two things be the same?
– Tommaso Bendinelli
Dec 20 at 17:35
1
@TommasoBendinelli I was also puzzled by it when I first saw it. But in more abstract mathematics, a vector simply means an element of a vector space. Since we have seen the set of all real-valued functions form a vector space, a real-valued function is a vector. Conversely, for an old-school vector $begin{pmatrix} x_0 \ x_1end{pmatrix}$ living on the plane, it can be seen as a real-valued function $v: {0, 1} to mathbb{R}$ where $v(0) = v_0$ and $v(1) = v_1$. Hope it helps you to visualize.
– Alex Vong
Dec 20 at 19:21
Thank you! If I say let's "A" be a vector space of exponential function ($exp^{x*a}$) with $a$ constrained to be either 1 or 2, this vector space has dimension 2. Right? If I remove the constraint on $a$ at this point this the vector space has infinite dimension. Is it correct ?
– Tommaso Bendinelli
Dec 20 at 19:35
|
show 4 more comments
You should not try to "visualize" a single vector as whatever by all means, tried this ever for a five-dimensional one? We can't "visualise" such high-dimensional vectors, but we want to talk of concepts of parallelism or planes or projections (in such vector spaces). You can't "visualise" the vector $(1,2,3,4,5)$, but you may say that it's parallel to $(2,4,6,8,10)$ and that it's projection on the (not visualisable) plane spanned by $(1,0,0,0,0)$ and $(0,1,0,0,0)$ is $(1,2,0,0,0)$. And that transfer can be done with sets of functions.
Take, for example, $mathbb R^J$ (where $J$ is a non-empty set), the set of real-valued functions defined on $J$. We want consider each member, that is: each function, of $mathbb R^J$ as a single vector.
First we recall that two functions $f$ and $g$, defined on the same domain, are defined equal, iff they are pointwise equal, that is, $f=g$ iff for all $x$ from the common domain we have $f(x)=g(x)$.
From here we may define the sum of two functions $f$ and $g$, which is a function of its own, pointwise:
Define for $f,gin mathbb R^J$ their sum $f+g$ by $(f+g)(x):=f(x)+g(x)$ for all $xin J$. Furthermore we may define for any real number $c$ the new function $ccdot f$ by $(ccdot f)(x):=ccdot f(x)$.
It's easy to verify that now $mathbb R^J$ is a real vector space. (It may be infinite-dimensional, but that doesn't matter in this case.) For example, one has to verify that
$$ccdot (f+g)=ccdot f+ccdot g.$$
But that's nearly trivial since by the above definitions
$$begin{align}bigl({bf ccdot(f+g)}bigr)(x)&=ccdotbigl((f+g)(x)bigr)\
&=ccdotbigl(f(x)+g(x)bigr)\
&=ccdot f(x)+ccdot g(x)\
&=(ccdot f)(x)+(ccdot g)(x)\
&=({bf ccdot f+ccdot g})(x).end{align}
$$
To give an example, recall that for any non-zero vector $f$ of a vector space $V$ the set $g={ccdot f|cin mathbb R}$ is a straight line through the origin. Now let $J=mathbb R$, hence $V=mathbb R^{mathbb R}$ and let $f$ be a well known function defined by $f(t)=t^2$.
From this point of view the set $g={ccdot f|cin mathbb R}$ is a straight line in $V$ through the origin. Any point $p$ of $g$ is a function $p$ which is defined by $p(t)=ccdot t^2$.
By the way, the "usual" vector space $mathbb R^n=mathbb R^{{1,dots,n}}$ is nothing else as the set of functions $vec vcolon{1,dots,n}tomathbb R$, can you see this? Such a $vec v$ is determined by the values it takes for $1,dots,n$, that is by $vec v(1).dots,vec v(n)$; commonly one writes $v_k$ instead of $vec v(k)$ for $1leq kleq n$. And the notation
$$vec v=begin{pmatrix}v_1\
vdots\
v_nend{pmatrix}$$
is nothing else but an abbreviate form of the table of values that $vec v$ takes on ${1,dots, n}$.
Now take another function $vec w$ from $mathbb R^n$. From the above definitions we may compute $vec v+vec w$, namely by $(vec v+vec w)(k)=vec v(k)+vec w(k)$. Now this boils down, abbreviated, to
$$vec v+vec w=begin{pmatrix}v_1+w_1\
vdots\
v_n+w_nend{pmatrix}.$$
Thank you everyone for your answers. I really appreciate the your willingness to help me. There is only one thing that I don't get, a vector space of a functions is composed by functions (and thus cannot be visualised) or by vectors?
– Tommaso Bendinelli
Dec 20 at 17:20
@TommasoBendinelli First of all, a function is a vector (and vice versa). Thus you may transfer the notion of parallelness. e.g., to functions as well: so $f$ and $3cdot f$ are parallel functions (vectors).
– Michael Hoppe
Dec 20 at 17:27
I can't visualize it, I have always have thought a function as a mapping between the domain and the codomain. While I see a vector as (forgive but lack of formalism) an "arrow" in an n-dimensional space. How can these two things be the same?
– Tommaso Bendinelli
Dec 20 at 17:35
1
@TommasoBendinelli I was also puzzled by it when I first saw it. But in more abstract mathematics, a vector simply means an element of a vector space. Since we have seen the set of all real-valued functions form a vector space, a real-valued function is a vector. Conversely, for an old-school vector $begin{pmatrix} x_0 \ x_1end{pmatrix}$ living on the plane, it can be seen as a real-valued function $v: {0, 1} to mathbb{R}$ where $v(0) = v_0$ and $v(1) = v_1$. Hope it helps you to visualize.
– Alex Vong
Dec 20 at 19:21
Thank you! If I say let's "A" be a vector space of exponential function ($exp^{x*a}$) with $a$ constrained to be either 1 or 2, this vector space has dimension 2. Right? If I remove the constraint on $a$ at this point this the vector space has infinite dimension. Is it correct ?
– Tommaso Bendinelli
Dec 20 at 19:35
|
show 4 more comments
You should not try to "visualize" a single vector as whatever by all means, tried this ever for a five-dimensional one? We can't "visualise" such high-dimensional vectors, but we want to talk of concepts of parallelism or planes or projections (in such vector spaces). You can't "visualise" the vector $(1,2,3,4,5)$, but you may say that it's parallel to $(2,4,6,8,10)$ and that it's projection on the (not visualisable) plane spanned by $(1,0,0,0,0)$ and $(0,1,0,0,0)$ is $(1,2,0,0,0)$. And that transfer can be done with sets of functions.
Take, for example, $mathbb R^J$ (where $J$ is a non-empty set), the set of real-valued functions defined on $J$. We want consider each member, that is: each function, of $mathbb R^J$ as a single vector.
First we recall that two functions $f$ and $g$, defined on the same domain, are defined equal, iff they are pointwise equal, that is, $f=g$ iff for all $x$ from the common domain we have $f(x)=g(x)$.
From here we may define the sum of two functions $f$ and $g$, which is a function of its own, pointwise:
Define for $f,gin mathbb R^J$ their sum $f+g$ by $(f+g)(x):=f(x)+g(x)$ for all $xin J$. Furthermore we may define for any real number $c$ the new function $ccdot f$ by $(ccdot f)(x):=ccdot f(x)$.
It's easy to verify that now $mathbb R^J$ is a real vector space. (It may be infinite-dimensional, but that doesn't matter in this case.) For example, one has to verify that
$$ccdot (f+g)=ccdot f+ccdot g.$$
But that's nearly trivial since by the above definitions
$$begin{align}bigl({bf ccdot(f+g)}bigr)(x)&=ccdotbigl((f+g)(x)bigr)\
&=ccdotbigl(f(x)+g(x)bigr)\
&=ccdot f(x)+ccdot g(x)\
&=(ccdot f)(x)+(ccdot g)(x)\
&=({bf ccdot f+ccdot g})(x).end{align}
$$
To give an example, recall that for any non-zero vector $f$ of a vector space $V$ the set $g={ccdot f|cin mathbb R}$ is a straight line through the origin. Now let $J=mathbb R$, hence $V=mathbb R^{mathbb R}$ and let $f$ be a well known function defined by $f(t)=t^2$.
From this point of view the set $g={ccdot f|cin mathbb R}$ is a straight line in $V$ through the origin. Any point $p$ of $g$ is a function $p$ which is defined by $p(t)=ccdot t^2$.
By the way, the "usual" vector space $mathbb R^n=mathbb R^{{1,dots,n}}$ is nothing else as the set of functions $vec vcolon{1,dots,n}tomathbb R$, can you see this? Such a $vec v$ is determined by the values it takes for $1,dots,n$, that is by $vec v(1).dots,vec v(n)$; commonly one writes $v_k$ instead of $vec v(k)$ for $1leq kleq n$. And the notation
$$vec v=begin{pmatrix}v_1\
vdots\
v_nend{pmatrix}$$
is nothing else but an abbreviate form of the table of values that $vec v$ takes on ${1,dots, n}$.
Now take another function $vec w$ from $mathbb R^n$. From the above definitions we may compute $vec v+vec w$, namely by $(vec v+vec w)(k)=vec v(k)+vec w(k)$. Now this boils down, abbreviated, to
$$vec v+vec w=begin{pmatrix}v_1+w_1\
vdots\
v_n+w_nend{pmatrix}.$$
You should not try to "visualize" a single vector as whatever by all means, tried this ever for a five-dimensional one? We can't "visualise" such high-dimensional vectors, but we want to talk of concepts of parallelism or planes or projections (in such vector spaces). You can't "visualise" the vector $(1,2,3,4,5)$, but you may say that it's parallel to $(2,4,6,8,10)$ and that it's projection on the (not visualisable) plane spanned by $(1,0,0,0,0)$ and $(0,1,0,0,0)$ is $(1,2,0,0,0)$. And that transfer can be done with sets of functions.
Take, for example, $mathbb R^J$ (where $J$ is a non-empty set), the set of real-valued functions defined on $J$. We want consider each member, that is: each function, of $mathbb R^J$ as a single vector.
First we recall that two functions $f$ and $g$, defined on the same domain, are defined equal, iff they are pointwise equal, that is, $f=g$ iff for all $x$ from the common domain we have $f(x)=g(x)$.
From here we may define the sum of two functions $f$ and $g$, which is a function of its own, pointwise:
Define for $f,gin mathbb R^J$ their sum $f+g$ by $(f+g)(x):=f(x)+g(x)$ for all $xin J$. Furthermore we may define for any real number $c$ the new function $ccdot f$ by $(ccdot f)(x):=ccdot f(x)$.
It's easy to verify that now $mathbb R^J$ is a real vector space. (It may be infinite-dimensional, but that doesn't matter in this case.) For example, one has to verify that
$$ccdot (f+g)=ccdot f+ccdot g.$$
But that's nearly trivial since by the above definitions
$$begin{align}bigl({bf ccdot(f+g)}bigr)(x)&=ccdotbigl((f+g)(x)bigr)\
&=ccdotbigl(f(x)+g(x)bigr)\
&=ccdot f(x)+ccdot g(x)\
&=(ccdot f)(x)+(ccdot g)(x)\
&=({bf ccdot f+ccdot g})(x).end{align}
$$
To give an example, recall that for any non-zero vector $f$ of a vector space $V$ the set $g={ccdot f|cin mathbb R}$ is a straight line through the origin. Now let $J=mathbb R$, hence $V=mathbb R^{mathbb R}$ and let $f$ be a well known function defined by $f(t)=t^2$.
From this point of view the set $g={ccdot f|cin mathbb R}$ is a straight line in $V$ through the origin. Any point $p$ of $g$ is a function $p$ which is defined by $p(t)=ccdot t^2$.
By the way, the "usual" vector space $mathbb R^n=mathbb R^{{1,dots,n}}$ is nothing else as the set of functions $vec vcolon{1,dots,n}tomathbb R$, can you see this? Such a $vec v$ is determined by the values it takes for $1,dots,n$, that is by $vec v(1).dots,vec v(n)$; commonly one writes $v_k$ instead of $vec v(k)$ for $1leq kleq n$. And the notation
$$vec v=begin{pmatrix}v_1\
vdots\
v_nend{pmatrix}$$
is nothing else but an abbreviate form of the table of values that $vec v$ takes on ${1,dots, n}$.
Now take another function $vec w$ from $mathbb R^n$. From the above definitions we may compute $vec v+vec w$, namely by $(vec v+vec w)(k)=vec v(k)+vec w(k)$. Now this boils down, abbreviated, to
$$vec v+vec w=begin{pmatrix}v_1+w_1\
vdots\
v_n+w_nend{pmatrix}.$$
edited Dec 20 at 19:47
answered Dec 20 at 16:49
Michael Hoppe
10.8k31834
10.8k31834
Thank you everyone for your answers. I really appreciate the your willingness to help me. There is only one thing that I don't get, a vector space of a functions is composed by functions (and thus cannot be visualised) or by vectors?
– Tommaso Bendinelli
Dec 20 at 17:20
@TommasoBendinelli First of all, a function is a vector (and vice versa). Thus you may transfer the notion of parallelness. e.g., to functions as well: so $f$ and $3cdot f$ are parallel functions (vectors).
– Michael Hoppe
Dec 20 at 17:27
I can't visualize it, I have always have thought a function as a mapping between the domain and the codomain. While I see a vector as (forgive but lack of formalism) an "arrow" in an n-dimensional space. How can these two things be the same?
– Tommaso Bendinelli
Dec 20 at 17:35
1
@TommasoBendinelli I was also puzzled by it when I first saw it. But in more abstract mathematics, a vector simply means an element of a vector space. Since we have seen the set of all real-valued functions form a vector space, a real-valued function is a vector. Conversely, for an old-school vector $begin{pmatrix} x_0 \ x_1end{pmatrix}$ living on the plane, it can be seen as a real-valued function $v: {0, 1} to mathbb{R}$ where $v(0) = v_0$ and $v(1) = v_1$. Hope it helps you to visualize.
– Alex Vong
Dec 20 at 19:21
Thank you! If I say let's "A" be a vector space of exponential function ($exp^{x*a}$) with $a$ constrained to be either 1 or 2, this vector space has dimension 2. Right? If I remove the constraint on $a$ at this point this the vector space has infinite dimension. Is it correct ?
– Tommaso Bendinelli
Dec 20 at 19:35
|
show 4 more comments
Thank you everyone for your answers. I really appreciate the your willingness to help me. There is only one thing that I don't get, a vector space of a functions is composed by functions (and thus cannot be visualised) or by vectors?
– Tommaso Bendinelli
Dec 20 at 17:20
@TommasoBendinelli First of all, a function is a vector (and vice versa). Thus you may transfer the notion of parallelness. e.g., to functions as well: so $f$ and $3cdot f$ are parallel functions (vectors).
– Michael Hoppe
Dec 20 at 17:27
I can't visualize it, I have always have thought a function as a mapping between the domain and the codomain. While I see a vector as (forgive but lack of formalism) an "arrow" in an n-dimensional space. How can these two things be the same?
– Tommaso Bendinelli
Dec 20 at 17:35
1
@TommasoBendinelli I was also puzzled by it when I first saw it. But in more abstract mathematics, a vector simply means an element of a vector space. Since we have seen the set of all real-valued functions form a vector space, a real-valued function is a vector. Conversely, for an old-school vector $begin{pmatrix} x_0 \ x_1end{pmatrix}$ living on the plane, it can be seen as a real-valued function $v: {0, 1} to mathbb{R}$ where $v(0) = v_0$ and $v(1) = v_1$. Hope it helps you to visualize.
– Alex Vong
Dec 20 at 19:21
Thank you! If I say let's "A" be a vector space of exponential function ($exp^{x*a}$) with $a$ constrained to be either 1 or 2, this vector space has dimension 2. Right? If I remove the constraint on $a$ at this point this the vector space has infinite dimension. Is it correct ?
– Tommaso Bendinelli
Dec 20 at 19:35
Thank you everyone for your answers. I really appreciate the your willingness to help me. There is only one thing that I don't get, a vector space of a functions is composed by functions (and thus cannot be visualised) or by vectors?
– Tommaso Bendinelli
Dec 20 at 17:20
Thank you everyone for your answers. I really appreciate the your willingness to help me. There is only one thing that I don't get, a vector space of a functions is composed by functions (and thus cannot be visualised) or by vectors?
– Tommaso Bendinelli
Dec 20 at 17:20
@TommasoBendinelli First of all, a function is a vector (and vice versa). Thus you may transfer the notion of parallelness. e.g., to functions as well: so $f$ and $3cdot f$ are parallel functions (vectors).
– Michael Hoppe
Dec 20 at 17:27
@TommasoBendinelli First of all, a function is a vector (and vice versa). Thus you may transfer the notion of parallelness. e.g., to functions as well: so $f$ and $3cdot f$ are parallel functions (vectors).
– Michael Hoppe
Dec 20 at 17:27
I can't visualize it, I have always have thought a function as a mapping between the domain and the codomain. While I see a vector as (forgive but lack of formalism) an "arrow" in an n-dimensional space. How can these two things be the same?
– Tommaso Bendinelli
Dec 20 at 17:35
I can't visualize it, I have always have thought a function as a mapping between the domain and the codomain. While I see a vector as (forgive but lack of formalism) an "arrow" in an n-dimensional space. How can these two things be the same?
– Tommaso Bendinelli
Dec 20 at 17:35
1
1
@TommasoBendinelli I was also puzzled by it when I first saw it. But in more abstract mathematics, a vector simply means an element of a vector space. Since we have seen the set of all real-valued functions form a vector space, a real-valued function is a vector. Conversely, for an old-school vector $begin{pmatrix} x_0 \ x_1end{pmatrix}$ living on the plane, it can be seen as a real-valued function $v: {0, 1} to mathbb{R}$ where $v(0) = v_0$ and $v(1) = v_1$. Hope it helps you to visualize.
– Alex Vong
Dec 20 at 19:21
@TommasoBendinelli I was also puzzled by it when I first saw it. But in more abstract mathematics, a vector simply means an element of a vector space. Since we have seen the set of all real-valued functions form a vector space, a real-valued function is a vector. Conversely, for an old-school vector $begin{pmatrix} x_0 \ x_1end{pmatrix}$ living on the plane, it can be seen as a real-valued function $v: {0, 1} to mathbb{R}$ where $v(0) = v_0$ and $v(1) = v_1$. Hope it helps you to visualize.
– Alex Vong
Dec 20 at 19:21
Thank you! If I say let's "A" be a vector space of exponential function ($exp^{x*a}$) with $a$ constrained to be either 1 or 2, this vector space has dimension 2. Right? If I remove the constraint on $a$ at this point this the vector space has infinite dimension. Is it correct ?
– Tommaso Bendinelli
Dec 20 at 19:35
Thank you! If I say let's "A" be a vector space of exponential function ($exp^{x*a}$) with $a$ constrained to be either 1 or 2, this vector space has dimension 2. Right? If I remove the constraint on $a$ at this point this the vector space has infinite dimension. Is it correct ?
– Tommaso Bendinelli
Dec 20 at 19:35
|
show 4 more comments
I'll offer a point of view that gives some concrete examples.
As people have mentioned, the only thing necessary to have a "vector space" is the ability to add objects together, and multiply them by scalars (subject to some special rules). We have this for functions that share a common domain and codomain. If we consider all functions with codomain $mathbb{R}$, and fixed codomain $D$, we get an infinite-dimensional vector space (unless $D$ is a finite set). This is what is usually called $mathbb{R}^{D}$.
Now, if we want a finite dimensional vector space, what we are looking for is a subspace of $mathbb{R}^{D}$ that can be spanned by a finite set of functions. Here span is the normal linear algebra concept, where we are allowed to take linear combinations of the functions, e.g. the span of functions $f(x)$ and $g(x)$ would look like ${ af(x) + bg(x) : a,b in mathbb{R}}$.
Some examples:
If we take the set of constant functions $f(x) = c$ for $c in mathbb{R}$, this is a 1-dimensional vector space of functions, because any such function is just $c$ times the function $f(x) = 1$.
If we take the set of polynomials of degree less than $n$, we get a vector space of dimensions $n+1$, for example the polynomials with degree less than 4 gives a 5-dimensional vector space with basis ${1,x,x^{2}, x^{3}, x^{4}}$.
If we take linear combinations of $sin{x}$ and $cos{x}$, we get a vector space of dimension 2 containing functions of the form ${asin{x}+bcos{x} : a,b in mathbb{R}}$ (it can be shown that $sin{x}$ and $cos{x}$ are not scalar multiples of each other, so are linearly independent).
I've deleted my answer—I think it would cause confusion. The question I was trying to answer was "How can a set of functions form a space anyway?" which I felt other answers were jumping past.
– timtfj
Dec 20 at 21:42
add a comment |
I'll offer a point of view that gives some concrete examples.
As people have mentioned, the only thing necessary to have a "vector space" is the ability to add objects together, and multiply them by scalars (subject to some special rules). We have this for functions that share a common domain and codomain. If we consider all functions with codomain $mathbb{R}$, and fixed codomain $D$, we get an infinite-dimensional vector space (unless $D$ is a finite set). This is what is usually called $mathbb{R}^{D}$.
Now, if we want a finite dimensional vector space, what we are looking for is a subspace of $mathbb{R}^{D}$ that can be spanned by a finite set of functions. Here span is the normal linear algebra concept, where we are allowed to take linear combinations of the functions, e.g. the span of functions $f(x)$ and $g(x)$ would look like ${ af(x) + bg(x) : a,b in mathbb{R}}$.
Some examples:
If we take the set of constant functions $f(x) = c$ for $c in mathbb{R}$, this is a 1-dimensional vector space of functions, because any such function is just $c$ times the function $f(x) = 1$.
If we take the set of polynomials of degree less than $n$, we get a vector space of dimensions $n+1$, for example the polynomials with degree less than 4 gives a 5-dimensional vector space with basis ${1,x,x^{2}, x^{3}, x^{4}}$.
If we take linear combinations of $sin{x}$ and $cos{x}$, we get a vector space of dimension 2 containing functions of the form ${asin{x}+bcos{x} : a,b in mathbb{R}}$ (it can be shown that $sin{x}$ and $cos{x}$ are not scalar multiples of each other, so are linearly independent).
I've deleted my answer—I think it would cause confusion. The question I was trying to answer was "How can a set of functions form a space anyway?" which I felt other answers were jumping past.
– timtfj
Dec 20 at 21:42
add a comment |
I'll offer a point of view that gives some concrete examples.
As people have mentioned, the only thing necessary to have a "vector space" is the ability to add objects together, and multiply them by scalars (subject to some special rules). We have this for functions that share a common domain and codomain. If we consider all functions with codomain $mathbb{R}$, and fixed codomain $D$, we get an infinite-dimensional vector space (unless $D$ is a finite set). This is what is usually called $mathbb{R}^{D}$.
Now, if we want a finite dimensional vector space, what we are looking for is a subspace of $mathbb{R}^{D}$ that can be spanned by a finite set of functions. Here span is the normal linear algebra concept, where we are allowed to take linear combinations of the functions, e.g. the span of functions $f(x)$ and $g(x)$ would look like ${ af(x) + bg(x) : a,b in mathbb{R}}$.
Some examples:
If we take the set of constant functions $f(x) = c$ for $c in mathbb{R}$, this is a 1-dimensional vector space of functions, because any such function is just $c$ times the function $f(x) = 1$.
If we take the set of polynomials of degree less than $n$, we get a vector space of dimensions $n+1$, for example the polynomials with degree less than 4 gives a 5-dimensional vector space with basis ${1,x,x^{2}, x^{3}, x^{4}}$.
If we take linear combinations of $sin{x}$ and $cos{x}$, we get a vector space of dimension 2 containing functions of the form ${asin{x}+bcos{x} : a,b in mathbb{R}}$ (it can be shown that $sin{x}$ and $cos{x}$ are not scalar multiples of each other, so are linearly independent).
I'll offer a point of view that gives some concrete examples.
As people have mentioned, the only thing necessary to have a "vector space" is the ability to add objects together, and multiply them by scalars (subject to some special rules). We have this for functions that share a common domain and codomain. If we consider all functions with codomain $mathbb{R}$, and fixed codomain $D$, we get an infinite-dimensional vector space (unless $D$ is a finite set). This is what is usually called $mathbb{R}^{D}$.
Now, if we want a finite dimensional vector space, what we are looking for is a subspace of $mathbb{R}^{D}$ that can be spanned by a finite set of functions. Here span is the normal linear algebra concept, where we are allowed to take linear combinations of the functions, e.g. the span of functions $f(x)$ and $g(x)$ would look like ${ af(x) + bg(x) : a,b in mathbb{R}}$.
Some examples:
If we take the set of constant functions $f(x) = c$ for $c in mathbb{R}$, this is a 1-dimensional vector space of functions, because any such function is just $c$ times the function $f(x) = 1$.
If we take the set of polynomials of degree less than $n$, we get a vector space of dimensions $n+1$, for example the polynomials with degree less than 4 gives a 5-dimensional vector space with basis ${1,x,x^{2}, x^{3}, x^{4}}$.
If we take linear combinations of $sin{x}$ and $cos{x}$, we get a vector space of dimension 2 containing functions of the form ${asin{x}+bcos{x} : a,b in mathbb{R}}$ (it can be shown that $sin{x}$ and $cos{x}$ are not scalar multiples of each other, so are linearly independent).
edited Dec 20 at 21:39
answered Dec 20 at 20:04
Morgan Rodgers
9,57021439
9,57021439
I've deleted my answer—I think it would cause confusion. The question I was trying to answer was "How can a set of functions form a space anyway?" which I felt other answers were jumping past.
– timtfj
Dec 20 at 21:42
add a comment |
I've deleted my answer—I think it would cause confusion. The question I was trying to answer was "How can a set of functions form a space anyway?" which I felt other answers were jumping past.
– timtfj
Dec 20 at 21:42
I've deleted my answer—I think it would cause confusion. The question I was trying to answer was "How can a set of functions form a space anyway?" which I felt other answers were jumping past.
– timtfj
Dec 20 at 21:42
I've deleted my answer—I think it would cause confusion. The question I was trying to answer was "How can a set of functions form a space anyway?" which I felt other answers were jumping past.
– timtfj
Dec 20 at 21:42
add a comment |
The notion of a vector space is abstract and it can be applied to Functions (sometimes these spaces are called Function Spaces) and these spaces are the subject of Functional Analysis.
It is important to get away from the geometric representation of a vector space - they are special cases and impossible to think about when we move into higher dimensional spaces. Instead, consider our definition of a vector space:
Let $V$ be a set, equipped with vector addition and multiplication of vectors by scalars satisfying the usual axioms, which is closed under linear combinations; i.e.
$$ \forall x, y \in V,\ \forall c, d \in \mathbb{R}: \quad cx + dy \in V. $$
Then we call $V$ a vector space.
When we talk about function spaces, we are talking about mappings from a set $X$ to a vector space $V$ (over a field, but don't get too bogged down in this if you don't know what that means). Note that $X$ can itself be a vector space, in which case we can single out the linear mappings from $X$ to $V$.
A simpler way of thinking about them: a function space is a collection of functions that share a common domain and codomain, often together with some further property (such as continuity).
Example time!
The function space $C(\mathbb{R}^n)$ consists of all functions that are continuous on $\mathbb{R}^n$; e.g. $f(x) = x$ is in $C(\mathbb{R})$.
Hilbert spaces are another important class of vector spaces that often arise as function spaces, and they are frequently described as generalisations of Euclidean space.
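To tie this back to the question's finite-dimensional setting, here is a small sketch of my own (not part of the answer), assuming we work with the basis $\{1, x, x^{2}\}$: a polynomial of degree at most 2 is determined by its coordinate vector in $\mathbb{R}^3$, and adding polynomials corresponds to adding those coordinate vectors.

```python
import numpy as np

# Coordinates of p(x) = c0 + c1*x + c2*x**2 with respect to the basis {1, x, x^2}.
def poly_from_coords(coords):
    c0, c1, c2 = coords
    return lambda x: c0 + c1 * x + c2 * x**2

p_coords = np.array([1.0, 0.0, 2.0])   # p(x) = 1 + 2x^2
q_coords = np.array([0.0, 3.0, -1.0])  # q(x) = 3x - x^2

p, q = poly_from_coords(p_coords), poly_from_coords(q_coords)
s = poly_from_coords(p_coords + q_coords)  # coordinates add like vectors in R^3

# The pointwise sum of the functions agrees with the function built from summed coordinates.
for x in [-1.0, 0.5, 2.0]:
    assert np.isclose(p(x) + q(x), s(x))
```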
answered Dec 20 at 16:48
Joel Biffin