Run two computations in parallel, and return the results together

I am reading the "Learning Concurrent Programming in Scala" and there are exercises in the end of each chapter.
One of exercises is




Implement a parallel method which takes two computation blocks a and b, and starts each of them in a new thread. The method must return a tuple with the result values of both the computations. It should have the following signature: def parallel[A, B](a: => A, b: => B): (A, B)




I have implemented it in this way:



    def parallel[A, B](a: => A, b: => B): (A, B) = {
      var aResult: Option[A] = Option.empty
      var bResult: Option[B] = Option.empty
      val t1 = thread { aResult = Some(a) }
      val t2 = thread { bResult = Some(b) }
      t1.join()
      t2.join()
      (aResult.get, bResult.get)
    }


where thread is



    def thread(block: => Unit): Thread = {
      val t = new Thread {
        override def run(): Unit = block
      }
      t.start()
      t
    }


My question is whether this implementation is free of race conditions. I have seen that other people had similar implementations but declared the result-holder variables aResult and bResult volatile. I am not sure that this is necessary, and I was also unable to design a test that would break the correctness of my implementation.



From my point of view the implementation is free of race conditions because the parallel method does not access any shared variables: all of its state is in local variables, so nothing is shared. Thus I think that adding @volatile to aResult and bResult is not needed in this case.
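
For reference, here is a minimal sketch of the kind of stress test one might try; it is not from the original post, and since t1.join()/t2.join() already provide the needed happens-before edges (see the answer below), it is expected to pass rather than expose a bug:

    // Hypothetical harness: repeatedly call parallel and check that both
    // results are visible to the caller after the joins.
    object ParallelStressTest extends App {
      def thread(block: => Unit): Thread = {
        val t = new Thread {
          override def run(): Unit = block
        }
        t.start()
        t
      }

      def parallel[A, B](a: => A, b: => B): (A, B) = {
        var aResult: Option[A] = Option.empty
        var bResult: Option[B] = Option.empty
        val t1 = thread { aResult = Some(a) }
        val t2 = thread { bResult = Some(b) }
        t1.join()
        t2.join()
        (aResult.get, bResult.get)
      }

      for (i <- 1 to 10000) {
        val (x, y) = parallel(i * 2, i.toString)
        assert(x == i * 2, s"missing or stale result for a at iteration $i")
        assert(y == i.toString, s"missing or stale result for b at iteration $i")
      }
      println("no visibility problem observed in 10000 iterations")
    }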










scala thread-safety concurrency

asked Sep 19 at 6:58, edited Sep 19 at 11:33 – Alexander Arendar
  • "is good enough?" That's a subjective question, one we can't answer with the current amount of information. Please clarify the title, since the rest of the question doesn't have the same issue. – Mast, Sep 19 at 7:06

  • Edited. I am interested in whether my implementation is free of race conditions and, if not, how to demonstrate that with a test. – Alexander Arendar, Sep 19 at 7:08

  • Welcome to Code Review! I changed the title so that it describes what the code does, per site goals: "State what your code does in your title, not your main concerns about it." Please check that I haven't misrepresented your code, and correct it if I have. – Toby Speight, Sep 19 at 10:14

  • Thanks for the corrections and the welcome. Let's see now if someone can provide some constructive feedback :) – Alexander Arendar, Sep 19 at 10:29















1 Answer
"From my point of view the implementation is free of race conditions because the parallel method does not access any shared variables: all of its state is in local variables, so nothing is shared."




That is not true. The variable aResult is shared between the thread that called parallel and the thread t1, so there needs to be some synchronization between these two threads; otherwise you run into undefined behaviour.





The low-level JDK memory model gets tricky, so I would try to avoid it as far as possible: use higher-level abstractions and prefer immutable state.



I suppose the following (still rather low-level) tools, any of which would solve these issues, are not available for this exercise; a sketch using the first of them follows the list:




  • Scala Futures

  • Java Callables (which return a result instead of having to publish it somewhere)

  • java.util.concurrent.AtomicReference (which you can use as a thread-safe holder if you have to use Runnable instead of Callable)
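
For illustration, a minimal sketch of parallel written with Scala Futures; this sketch is not part of the original answer and assumes the global execution context is acceptable:

    import scala.concurrent.{Await, Future}
    import scala.concurrent.ExecutionContext.Implicits.global
    import scala.concurrent.duration.Duration

    // Sketch only: both computations run on the global execution context and
    // Future.zip combines them, so no shared mutable variables are needed.
    // Note that this uses a thread pool rather than literally starting one
    // new Thread per block, so it follows the spirit of the exercise rather
    // than its letter.
    def parallelWithFutures[A, B](a: => A, b: => B): (A, B) = {
      val fa = Future(a)
      val fb = Future(b)
      Await.result(fa.zip(fb), Duration.Inf)
    }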




As for the specific question: According to the Java Memory Model FAQ




"All actions in a thread happen before any other thread successfully returns from a join() on that thread."




This means that after your master thread has joined the first worker thread, it will read the updated value of aResult without that variable needing to be volatile.



So your code looks correct to me.



But I had to look this up. Really try to avoid mutable state when multiple threads are concerned.
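
For completeness, a minimal sketch of the AtomicReference variant mentioned in the list above; it is not part of the original answer and reuses the thread helper from the question:

    import java.util.concurrent.atomic.AtomicReference

    // Sketch only: same structure as the question's parallel, but the results
    // are published through AtomicReference, whose get/set provide the
    // visibility guarantees that @volatile would otherwise give. Assumes the
    // thread(block) helper from the question is in scope.
    def parallelWithAtomics[A, B](a: => A, b: => B): (A, B) = {
      val aResult = new AtomicReference[A]()
      val bResult = new AtomicReference[B]()
      val t1 = thread { aResult.set(a) }
      val t2 = thread { bResult.set(b) }
      t1.join()
      t2.join()
      (aResult.get(), bResult.get())
    }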






answered Sep 27 at 14:15, edited Sep 27 at 14:23 – Thilo
  • Thilo, the point is that I am doing exercises which require using low-level concurrency primitives. I would not use them in real life, of course, but while I am learning I need to take this seriously and try to understand. Also, parallel is not a thread, just a method. – Alexander Arendar, Oct 1 at 9:47

  • parallel is a method, but some thread will be calling it. And this thread (which I called the "master thread" above) needs to communicate with the two threads started within the method. But according to the FAQ, it should work, because you use join. Without using thread synchronization (like join, start, volatile or synchronized) you have no guarantees that data properly gets passed between threads. – Thilo, Oct 1 at 9:52
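
To make the last point concrete, here is a sketch of what the method looks like with no synchronization at all; it is not from the original thread and reuses the thread helper from the question. Without join(), @volatile, or synchronized, the Java memory model gives no guarantee that the worker threads' writes ever become visible to the caller, so this version may spin forever or observe stale values:

    // Broken variant, for illustration only: nothing forces the writes made
    // by the worker threads to become visible to the caller, so the busy-wait
    // below is allowed to spin forever. Assumes the thread(block) helper from
    // the question is in scope.
    def parallelNoSync[A, B](a: => A, b: => B): (A, B) = {
      var aResult: Option[A] = None
      var bResult: Option[B] = None
      thread { aResult = Some(a) }
      thread { bResult = Some(b) }
      while (aResult.isEmpty || bResult.isEmpty) {} // no happens-before edge here
      (aResult.get, bResult.get)
    }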












