Moving brute-force search to FPGA
I am currently working on a scientific hobby project about computing the error-detection capabilities of CRCs. Unfortunately, the C++ code used for these computations can take years of run time on ordinary x64 CPUs, even on multi-core systems, and the power consumption of such systems is painful.
It occurred to me that the usual x64 brute-force search isn't the best approach, so I would like to move the algorithm to an FPGA. Alas, I have worked very little with FPGAs, and I lost even that minimal knowledge over decades of working in C/C++ software engineering. So I need some help judging the feasibility of my idea before burying myself in the technology.
The algorithm I want to run in hardware is a specialized ~1000-line piece of C++ code that could easily be ported to C. No floating-point operations. No standard libraries required. Highly frequent loops. Lots of basic 64-bit integer arithmetic. Even more bitwise operations (shift, OR, XOR, bit counting, etc.) and some array operations. A few kB of RAM and ROM should be sufficient. No peripherals required. The few memory allocations that are used could be removed by adapting the code. The computation results can easily be filtered internally, so a serial interface should be enough to pass the results to a PC.
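To give a feeling for the kind of code involved, here is a heavily simplified, hypothetical sketch (not my actual algorithm; the polynomial and the filter condition are purely illustrative) showing the operation mix: tight loops, 64-bit integers, shifts, XORs and bit counting.
```cpp
#include <bit>       // std::popcount (C++20)
#include <cstdint>
#include <cstdio>

// Bit-serial CRC over the low 'nbits' bits of 'data' (MSB first).
// Polynomial and conventions are illustrative only.
static uint32_t crc32_bits(uint64_t data, int nbits) {
    const uint32_t poly = 0x04C11DB7u;
    uint32_t crc = 0;
    for (int i = nbits - 1; i >= 0; --i) {
        uint32_t fb = (crc >> 31) ^ static_cast<uint32_t>((data >> i) & 1u);
        crc = (crc << 1) ^ (fb ? poly : 0u);
    }
    return crc;
}

int main() {
    uint64_t hits = 0;
    // Brute force over a (shortened) pattern space; keep only patterns
    // that pass some cheap internal filter.
    for (uint64_t pattern = 1; pattern < (1ull << 24); ++pattern) {
        if (std::popcount(pattern) <= 4 && crc32_bits(pattern, 32) == 0)
            ++hits;
    }
    std::printf("hits: %llu\n", static_cast<unsigned long long>(hits));
}
```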
I would like to compile the C++ or C code into VHDL and let it run on an FPGA as fast as possible. Also, since this is a hobby project, the FPGA (including software and a development board) should be affordable.
My questions:
- Can I expect a significant speedup? By which order of magnitude?
- Is there a C/C++ compiler suited for the purpose?
- Which FPGAs are suitable?
Tags: fpga, vhdl, component-selection, compiler
asked 5 hours ago by Silicomancer (new contributor)
Ultimately, running an algorithm in an FPGA instead of in software can be faster, but it really depends on the details of the algorithm. Essentially, you gain speed if you can parallelize or pipeline the data flow. If 64 simple operations need to be applied to a single data point before it is fully processed, the FPGA can pipeline them so that a new result comes out every clock cycle. But I don't know whether your algorithm is like that.
– mkeith, 5 hours ago
Almost the entire algorithm can be highly parallelized. The algorithm processes a single dataword. For a 32-bit CRC there are ~2^30 datawords of 32 bits each that can easily be processed independently (except for the final comparison/filtering of the results, which needs to be serialized).
– Silicomancer, 5 hours ago
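As a rough software illustration of that independence (hypothetical helper names), the dataword space can be split into disjoint strided ranges, each processed by an independent worker, with only the final merge serialized:
```cpp
#include <cstdint>
#include <cstdio>
#include <thread>
#include <vector>

// Placeholder for the real per-dataword CRC analysis (hypothetical name).
static void process_word(uint32_t word, uint64_t& local_result) {
    local_result += word & 1u;   // stand-in work only
}

int main() {
    const uint64_t total = 1ull << 30;                  // ~2^30 datawords
    const unsigned workers =
        std::thread::hardware_concurrency() ? std::thread::hardware_concurrency() : 1u;
    std::vector<uint64_t> partial(workers, 0);
    std::vector<std::thread> pool;
    for (unsigned t = 0; t < workers; ++t)
        pool.emplace_back([&, t] {
            for (uint64_t w = t; w < total; w += workers)   // disjoint stride
                process_word(static_cast<uint32_t>(w), partial[t]);
        });
    for (auto& th : pool) th.join();
    uint64_t result = 0;
    for (uint64_t p : partial) result += p;                 // serialized merge
    std::printf("merged result: %llu\n", static_cast<unsigned long long>(result));
}
```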
Have you considered using GPU acceleration for this? Implementation would be a lot simpler, as well as less expensive.
– duskwuff, 5 hours ago
1 Answer
Can I expect a significant speedup? By which order of magnitude?
Sure, by quite a lot. CRCs can be computed on data a byte at a time using a straightforward table lookup. A moderate-sized FPGA (say, a Xilinx XC6SLX75) has a hundred or more blocks of internal dual-port RAM, which allows 200 data streams to be processed in parallel at a rate of one byte per clock cycle, where the clock could be 200 MHz or more. That's a throughput of at least 40 GB/s. How fast is your "x64" CPU?
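For reference, a minimal software sketch of that byte-at-a-time table lookup (standard reflected CRC-32 here; on the FPGA, the 256-entry table is exactly what would live in a block RAM, with one lookup per byte per clock):
```cpp
#include <cstddef>
#include <cstdint>

// Byte-at-a-time CRC-32 (reflected form, polynomial 0xEDB88320) driven by a
// 256-entry lookup table. In an FPGA, the table maps onto a block RAM and
// one byte is consumed per clock cycle per RAM port.
struct Crc32Table {
    uint32_t table[256];
    Crc32Table() {
        for (uint32_t i = 0; i < 256; ++i) {
            uint32_t c = i;
            for (int k = 0; k < 8; ++k)
                c = (c & 1u) ? (c >> 1) ^ 0xEDB88320u : (c >> 1);
            table[i] = c;
        }
    }
    uint32_t compute(const uint8_t* data, size_t len) const {
        uint32_t crc = 0xFFFFFFFFu;
        for (size_t i = 0; i < len; ++i)
            crc = (crc >> 8) ^ table[(crc ^ data[i]) & 0xFFu];
        return crc ^ 0xFFFFFFFFu;
    }
};
```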
Is there a C/C++ compiler suited for the purpose?
Not really. If you want to get the most out of your FPGA, you'll want to use an HDL to define the hardware datapath directly. Implementations derived from programming languages are possible, but the performance ranges from lousy to useless.
Which FPGAs are suitable?
That's bordering on a product recommendation, which would be off-topic for this site, but look at the midrange offerings from Xilinx (such as the Spartan-6 series) or Intel (formerly Altera, such as their Cyclone IV series). Inexpensive development boards for these families are readily available from places like Digilent.
answered 5 hours ago by Dave Tweed♦
A table lookup is really not the best way to do it on an FPGA, because it only works for small inputs. Since a CRC is just a bunch of XOR gates, what you can do is run a wide data bus (and get a really high data rate) and then do an unrolled parallel CRC. You can relatively easily do a CRC-32 over data 64 bits at a time at 400 MHz, which gives you about 25 Gbps per instance, then drop a bunch of instances onto the FPGA to run in parallel.
– alex.forencich, 42 mins ago
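To make that concrete, here is a minimal C++ model of one such wide update step (illustrative non-reflected polynomial). Written in software it is a 64-iteration loop, but because each next-state bit is just an XOR of current-state bits and input bits, an HDL implementation unrolls it into a fixed combinational XOR network evaluated once per clock, and many such instances can run side by side:
```cpp
#include <cstdint>

// Advance a CRC-32 state by 64 message bits in one call (MSB first).
// In hardware, the loop body flattens into 32 XOR equations, one per
// state bit, so the whole update costs one clock cycle per instance.
static uint32_t crc32_update64(uint32_t crc, uint64_t data) {
    const uint32_t poly = 0x04C11DB7u;   // illustrative polynomial
    for (int i = 63; i >= 0; --i) {
        uint32_t fb = (crc >> 31) ^ static_cast<uint32_t>((data >> i) & 1u);
        crc = (crc << 1) ^ (fb ? poly : 0u);
    }
    return crc;
}
```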