I want to implement a logistic regression with dropout regularization but so far the only working example is the following:

```python
import torch
import torch.nn as nn

class logit(nn.Module):
    def __init__(self, input_dim=69, output_dim=1):
        super(logit, self).__init__()
        # Input layer (69) -> 1
        self.fc1 = nn.Linear(input_dim, input_dim)
        self.fc2 = nn.Linear(input_dim, output_dim)
        self.dp = nn.Dropout(p=0.2)

    # Feed-forward function
    def forward(self, x):
        x = self.fc1(x)
        x = self.dp(x)
        x = torch.sigmoid(self.fc2(x))
        return x
```
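For reference, a minimal way to exercise this model would be something like the following (the batch size here is just a placeholder):

```python
import torch

model = logit()          # 69 input features -> 1 output probability
x = torch.randn(8, 69)   # dummy batch: 8 samples, 69 features
probs = model(x)         # shape (8, 1), values in (0, 1)
print(probs.shape)
```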

Now, the problem with putting dropout in between the layers is that, in the end, I no longer have a logistic regression (correct me if I'm wrong).

What I would like to do is drop out at the input level.

## Answer

Actually, you still have a logistic regression with the dropout as it is: there is no non-linearity between `fc1` and `fc2`, so their composition is still a single linear map followed by a sigmoid.

The dropout between `fc1` and `fc2` will drop some (with `p=0.2`) of the `input_dim` features produced by `fc1`, requiring `fc2` to be robust to their absence. This does not change the logit at the output of your model. Moreover, remember that at test time the dropout is (usually) disabled.

Note that you could also apply dropout at the input level:

```python
def forward(self, x):
    x = self.dp(x)                     # drop ~20% of the raw input features
    x = self.fc1(x)
    x = self.dp(x)
    x = torch.sigmoid(self.fc2(x))
    return x
```

In this case, `fc1` would have to be robust to the absence of some of the input features.
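If you want the model to literally be a single-layer logistic regression with dropout applied only to the inputs, a minimal sketch could look like this (the class name is mine; the 69-feature input dimension and `p=0.2` are just carried over from the question):

```python
import torch
import torch.nn as nn

class LogisticRegressionWithInputDropout(nn.Module):
    def __init__(self, input_dim=69, p=0.2):
        super().__init__()
        self.dp = nn.Dropout(p=p)          # applied to the raw input features
        self.fc = nn.Linear(input_dim, 1)  # single linear layer -> one logit

    def forward(self, x):
        x = self.dp(x)                     # randomly zeroes ~p of the inputs (training mode only)
        return torch.sigmoid(self.fc(x))
```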