Why is it hard to simulate a quantum device with a classical device?
I was recently watching a talk by Urmila Mahadev on "Classical Verification of Quantum Computations" (see this). I am not entirely new to quantum computation: I have some familiarity with the qubit and with parts of quantum mechanics. However, I did not even get the meaning of "simulation". My guess is that it means the following: given an encoding of a quantum device together with its input, the classical device takes this encoding and produces the results. The only thing I have understood is that a quantum device may go through many states on a particular input.
Question: Why is it hard to simulate a quantum device with a classical device?
Please note that I am not sure what "hard" means here. Is it time-wise or space-wise? The meaning of "simulation" is also not clear to me.
algorithm
asked Dec 19 at 5:24 by new_bee
If it were possible, would we need quantum devices at all? – Mahathi Vempati, Dec 19 at 7:13
2 Answers
The first thing to understand is how quantum operations (i.e. quantum gates) and quantum states are mathematically represented:
- Quantum operations on $n$ qubits are unitary matrices of size $2^n \times 2^n$.
- Quantum states on $n$ qubits are complex vectors of size $2^n$.
If you are not 100% sure of these numbers, you can read more about them in:
- (Almost?) every book on quantum computing. For example, Nielsen & Chuang write about this at the very beginning of their book.
- These exponents in base $2$ come from the way states are composed with the tensor product. You can read a little bit more about it here (and see the short sketch just below this list).
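As a quick numpy illustration of that composition rule (an editor's sketch, not part of the original answer): each extra qubit doubles the length of the state vector, because states are combined with the Kronecker (tensor) product.

```python
import numpy as np

# Single-qubit basis state |0> as a length-2 complex vector.
ket0 = np.array([1, 0], dtype=complex)

# Build the n-qubit state |00...0> by repeated Kronecker (tensor) products.
state = ket0
for _ in range(4):               # add 4 more qubits, 5 qubits in total
    state = np.kron(state, ket0)

print(state.shape)               # (32,) == (2**5,): the size doubles per qubit
```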
Once you are convinced that the numbers I wrote above are valid, you have your answer:
Simulating a quantum device with a classical device is limited by the amount of RAM available on the classical computer.
To elaborate a little more, think about how a classical computer would simulate a quantum one. One thing that the classical computer will definitely have to store is the current quantum state of the quantum machine it is simulating. As I wrote at the beginning of my answer, a quantum state is a vector of $2^n$ complex numbers. Now let's compute (in the following, byte == octet):
- The size of a floating-point number is 4 or 8 octets (depending on the precision, i.e. float or double, and assuming a non-exotic classical computer).
- A complex number is represented by 2 floating-point numbers: one for the real part and one for the imaginary part. So it needs 8 or 16 octets.
- The quantum state needs $2^n$ complex numbers, i.e. $f_{\text{simple}}(n) = 2^{3+n}$ octets if you use simple precision or $f_{\text{double}}(n) = 2^{4+n}$ octets if you use double precision.
Say you want to simulate an $n$-qubit quantum computer with your classical computer (a short script reproducing these numbers follows the list):
- For $n = 10$ you will need at least $f_{\text{simple}}(10) = 2^{13} = 8192\,\text{o} = 8\,\text{kio}$. Every classical computer should be able to do this.
- For $n = 20$ you will need at least $f_{\text{simple}}(20) = 2^{23} = 8388608\,\text{o} = 8\,\text{Mio}$. Every classical computer should be able to do this.
- For $n = 30$ you will need at least $f_{\text{simple}}(30) = 2^{33} = 8589934592\,\text{o} = 8\,\text{Gio}$. An off-the-shelf laptop can do this, but older computers may not have enough RAM.
- For $n = 40$ you will need at least $f_{\text{simple}}(40) = 2^{43} = 8796093022208\,\text{o} = 8\,\text{Tio}$. This is definitely out of reach for consumer hardware; you will need access to a computing server.
- For $n = 50$ you will need at least $f_{\text{simple}}(50) = 2^{53} = 9007199254740992\,\text{o} = 8\,\text{Pio}$. Even Summit, the TOP 1 supercomputer (in terms of FLOPS), cannot simulate this, as it "only" has $2.8\,\text{Pio}$ of RAM.
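A short Python sketch (an editor's addition, not part of the original answer) that reproduces these figures, assuming 8 octets per complex amplitude (simple precision):

```python
# Memory needed to store a dense n-qubit state vector, assuming
# 8 octets per complex amplitude (i.e. two 4-octet floats).
def state_vector_octets(n_qubits: int, octets_per_amplitude: int = 8) -> int:
    return (2 ** n_qubits) * octets_per_amplitude

for n in (10, 20, 30, 40, 50):
    octets = state_vector_octets(n)
    print(f"n = {n}: {octets} octets = {octets / 2**30:.6f} Gio")
```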
Of course, some clever simulation algorithms can exploit the specific structure of some quantum programs to reduce the amount of memory needed. But for a generic quantum program, this is the quantity of RAM you will need.
Note that I did not speak about computing power. The cost in terms of floating-point operations is generally not the limitation, because most quantum circuits are a succession of sparse quantum operations (i.e. they are represented by sparse matrices), and matrix-vector multiplication with a sparse matrix is quite cheap (depending on the sparsity of the matrix).
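To make the "no full dense matrix needed" point concrete (an editor's sketch, not the answer's code, using tensor contraction rather than an explicit sparse-matrix library): a single-qubit gate can be applied directly to the $2^n$-entry state vector, so each such gate costs $O(2^n)$ operations instead of the $O(4^n)$ a dense matrix-vector product would take.

```python
import numpy as np

def apply_single_qubit_gate(state: np.ndarray, gate: np.ndarray,
                            target: int, n_qubits: int) -> np.ndarray:
    """Apply a 2x2 gate to one qubit of a dense n-qubit state vector
    without ever building the full 2^n x 2^n matrix."""
    psi = state.reshape([2] * n_qubits)                   # n-index tensor view
    psi = np.tensordot(gate, psi, axes=([1], [target]))   # contract target axis
    psi = np.moveaxis(psi, 0, target)                     # restore qubit order
    return psi.reshape(2 ** n_qubits)

# Example: Hadamard on qubit 0 of the 3-qubit state |000>.
hadamard = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)
state = np.zeros(2 ** 3, dtype=complex)
state[0] = 1.0
print(apply_single_qubit_gate(state, hadamard, target=0, n_qubits=3))
```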
Nevertheless, note that you may have a $1$-qubit quantum program that contains $10^{30}$ quantum gates. In this case, the simulation is limited time-wise, not memory-wise.
answered Dec 19 at 8:54 by Nelimee
Nitpick: I think you mean "single precision" not "simple precision". It's also perfectly possible to have different sizes of representation, and to swap working data between RAM and cheaper storage (computers do it all the time; it will just be slower than holding everything in RAM at once). It might make more sense to just point out that 2^50 bytes = 1 pebibyte, so even with a very small representation of each number, and a very large working store, it will be impractical to work with a simulation at that scale.
– IMSoP, Dec 19 at 11:27
The simulation of a quantum computation (some people choose to use the term 'emulation' in this context to disambiguate from a different type of simulation) is when one tries to recreate the calculation that you want a quantum computer to perform, but on a classical computer.
When you're simulating a particular algorithm, there are many different problem instances, each with a problem size $n$ (usually the number of bits required to specify the instance). We say a problem is hard if the time it takes to run grows faster than any polynomial in $n$.
Now, it must be emphasised that we don't know that simulating a quantum computer is hard. It's just that we don't know how to do it efficiently, and we've tried quite hard. For example, we believe that quantum computers can efficiently perform some classical computations that are classically hard, at least as strongly as we believe that there is no efficient classical algorithm for factoring large composite integers (because there is a quantum algorithm that factors in polynomial time).
If we could prove that classical computers can simulate quantum computers efficiently, then there wouldn't be nearly so much interest in building a quantum computer. That said, such a simulation would only be an equivalence up to polynomial overhead; quantum computers could still be much faster, which might be desirable in some contexts.
So, your question effectively boils down to "where do quantum computers get their power from"? Variations on this theme have been asked a number of times already on this site. The way that I like to think about it is to recall that classical computers, no matter how complex, are built out of the same fundamental set of gates (indeed, one gate such as NAND is sufficient). If somebody suddenly comes along with an extra gate that cannot be built out of the existing gates, it suddenly gives you the potential to use this gate to improve existing algorithms. Sometimes it'll help, sometimes it won't.
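A small side illustration of the claim that one gate such as NAND suffices for classical logic (an editor's sketch, not part of the answer):

```python
# NAND is universal for classical logic: every other gate can be built from it.
def nand(a: bool, b: bool) -> bool:
    return not (a and b)

def not_(a: bool) -> bool:
    return nand(a, a)

def and_(a: bool, b: bool) -> bool:
    return nand(nand(a, b), nand(a, b))

def or_(a: bool, b: bool) -> bool:
    return nand(nand(a, a), nand(b, b))

# Check the constructions against Python's built-in logic on all inputs.
for a in (False, True):
    for b in (False, True):
        assert and_(a, b) == (a and b)
        assert or_(a, b) == (a or b)
        assert not_(a) == (not a)
```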
I would just like to point out that one aspect which is not the source of power is the exponential state space. Probabilistic classical computations also have an exponentially large state space, and yet we can still perform them. (Of course, the difference is about how we deal with probabilities. Quantum probabilities can interfere, which means that we have to keep all the paths "alive" as we simulate, rather than just sampling individual paths. But this is a much more subtle issue.)
answered Dec 19 at 10:21 by DaftWullie
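As an editor's illustration of that interference point (a minimal numpy sketch, not part of either answer): applying the Hadamard gate twice makes the $|1\rangle$ amplitudes cancel, which a classical sampler that only follows one random path at a time cannot reproduce.

```python
import numpy as np

hadamard = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)
ket0 = np.array([1, 0], dtype=complex)

# One Hadamard: both outcomes have probability 1/2, like a fair coin flip.
once = hadamard @ ket0
print(np.abs(once) ** 2)            # ~[0.5 0.5]

# Two Hadamards: the |1> amplitudes interfere destructively and cancel,
# so the state returns to |0> with certainty. Two independent fair coin
# flips would instead still give a 50/50 distribution.
twice = hadamard @ hadamard @ ket0
print(np.abs(twice) ** 2)           # ~[1. 0.]
```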