Batch gradient descent and stochastic gradient descent












I'm trying to implement logistic regression. I believe my batch gradient descent is correct, or at least it works well enough to give decent accuracy on the dataset I'm using. With stochastic gradient descent, however, I'm getting really poor accuracy, so I'm not sure whether the problem is my learning rate, the number of epochs, or the code itself. I'm also wondering how I would add regularization to both of these. Do I add a lambda variable and multiply it by the learning rate, or is there more to it?



BGD:



def batch_gradient(df, weights, bias, lr, epochs):
    X = df.values
    y = X[:, :1]        # label is in the first column
    X = X[:, 1:]
    length = X.shape[0]
    for i in range(epochs):
        output = sigmoid(np.dot(weights, X.T) + bias)
        weights_tmp = (1 / length) * np.dot(X.T, (output - y.T).T)
        bias_tmp = (1 / length) * np.sum(output - y.T)

        weights -= lr * weights_tmp.T
        bias -= lr * bias_tmp
    return weights, bias
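On the regularization question: multiplying lambda by the learning rate alone is not enough. The usual approach (L2, or "ridge") adds a penalty term of `(lam / m) * weights` to the weight gradient, while the bias is conventionally left unpenalized. A minimal sketch of the idea, not a drop-in for the function above: it assumes the label column has already been split out, and `lam` (the regularization strength) is a name introduced here for illustration.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def batch_gradient_l2(X, y, weights, bias, lr, epochs, lam):
    """Batch gradient descent with L2 regularization.
    X: (m, n) features, y: (m,) labels, weights: (n,), bias: scalar."""
    m = X.shape[0]
    for _ in range(epochs):
        output = sigmoid(X @ weights + bias)            # predictions, shape (m,)
        error = output - y
        # L2 penalty adds (lam / m) * weights to the weight gradient only
        grad_w = (X.T @ error) / m + (lam / m) * weights
        grad_b = error.mean()                           # bias is not regularized
        weights -= lr * grad_w
        bias -= lr * grad_b
    return weights, bias
```

With `lam = 0` this reduces to plain batch gradient descent; larger `lam` shrinks the learned weights toward zero.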


SGD:



def stochastic_gradient(df, weights, bias, lr, epochs):
    x_matrix = df.values
    for i in range(epochs):
        np.random.shuffle(x_matrix)
        x_instance = x_matrix[np.random.choice(x_matrix.shape[0], 1, replace=True)]
        y = x_instance[:, :1]

        output = sigmoid(np.dot(weights, x_instance[:, 1:].T) + bias)
        weights_tmp = lr * np.dot(x_instance[:, 1:].T, (output - y))

        weights = weights - weights_tmp.T
        bias -= lr * (output - y)
    return weights, bias
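For comparison: the function above draws only one random sample per epoch, so only `epochs` updates happen in total, which by itself can explain poor accuracy. The more common formulation of an SGD epoch visits every sample once in shuffled order. A sketch under the same assumptions as before (features and labels already split, a `sigmoid` helper available); this is one conventional variant, not the only correct one.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def sgd_epoch(X, y, weights, bias, lr):
    """One epoch of SGD: one update per sample, in shuffled order.
    X: (m, n) features, y: (m,) labels, weights: (n,), bias: scalar."""
    for i in np.random.permutation(X.shape[0]):
        xi, yi = X[i], y[i]
        output = sigmoid(xi @ weights + bias)  # scalar prediction
        error = output - yi                    # scalar residual
        weights -= lr * error * xi             # per-sample gradient step
        bias -= lr * error                     # bias stays a scalar
    return weights, bias
```

Note that the bias update here is a scalar; in the original, `bias -= lr * (output - y)` silently turns `bias` into an array because `output - y` is a 2-D slice.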









      python numpy pandas






edited Nov 26 at 4:19 by Jamal
asked Nov 26 at 0:53 by jj2593


























