Batch gradient descent and stochastic gradient descent
I'm trying to implement logistic regression, and I believe my batch gradient descent is correct, or at least it works well enough to give decent accuracy on the dataset I'm using. When I use stochastic gradient descent, however, I get really poor accuracy, so I'm not sure whether the problem is my learning rate, my number of epochs, or the code itself. I'm also wondering how I would add regularization to both of these: do I just add a variable lambda and multiply it by the learning rate, or is there more to it?
BGD:
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def batch_gradient(df, weights, bias, lr, epochs):
    X = df.values
    y = X[:, :1]        # first column holds the labels
    X = X[:, 1:]
    length = X.shape[0]
    for i in range(epochs):
        output = sigmoid(np.dot(weights, X.T) + bias)
        weights_tmp = (1 / length) * np.dot(X.T, (output - y.T).T)
        bias_tmp = (1 / length) * np.sum(output - y.T)
        weights -= lr * weights_tmp.T
        bias -= lr * bias_tmp
    return weights, bias
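On the regularization question, here is my current understanding (which may be wrong, so corrections welcome): for L2 regularization you don't multiply lambda by the learning rate directly; you add a penalty term `(lambda / m) * weights` to the weight gradient before the update, and leave the bias unregularized. A minimal sketch of a single regularized update step:

```python
import numpy as np

def l2_regularized_step(weights, grad_w, lr, lam, m):
    # grad_w is the unregularized gradient of the loss w.r.t. the weights;
    # lam is the regularization strength, m the number of training examples.
    # The L2 penalty adds (lam / m) * weights to the gradient.
    return weights - lr * (grad_w + (lam / m) * weights)

w = np.array([1.0, -2.0])
w_new = l2_regularized_step(w, np.array([0.1, 0.1]), lr=0.5, lam=2.0, m=4)
```

The same extra term would go into both the batch and the stochastic update; in the stochastic case `m` is often taken as 1 or folded into lambda.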
SGD:
def stochastic_gradient(df, weights, bias, lr, epochs):
    x_matrix = df.values
    for i in range(epochs):
        np.random.shuffle(x_matrix)
        x_instance = x_matrix[np.random.choice(x_matrix.shape[0], 1, replace=True)]
        y = x_instance[:, :1]
        output = sigmoid(np.dot(weights, x_instance[:, 1:].T) + bias)
        weights_tmp = lr * np.dot(x_instance[:, 1:].T, (output - y))
        weights = weights - weights_tmp.T
        bias -= lr * (output - y)
    return weights, bias
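One thing I suspect may be wrong with my version: each epoch draws only a single random row, so after `epochs` iterations only `epochs` examples have ever been seen, and `bias -= lr * (output - y)` turns the bias into an array because `output - y` is still 2-D. If I understand SGD correctly, each epoch should instead make one update per training example over the whole shuffled set. A sketch of what I mean (names here are my own, not from any library):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def sgd_full_pass(X, y, weights, bias, lr, epochs):
    # One update per training example, visiting every row each epoch.
    n = X.shape[0]
    for _ in range(epochs):
        order = np.random.permutation(n)   # fresh shuffle each epoch
        for i in order:
            xi = X[i]                      # shape (n_features,)
            pred = sigmoid(np.dot(weights, xi) + bias)
            err = pred - y[i]              # scalar error for this example
            weights -= lr * err * xi
            bias -= lr * err               # bias stays a scalar
    return weights, bias
```

Here `X` is the feature matrix and `y` a 1-D label vector, i.e. the label column already split out rather than sliced off inside the loop.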
python numpy pandas
edited Nov 26 at 4:19 by Jamal♦
asked Nov 26 at 0:53 by jj2593