This thesis explores inductive biases in the context of deep learning for pixel-level tasks. An inductive bias allows a learning algorithm to prioritize one solution over another and to generalize beyond the training data. Inductive biases have been studied extensively in deep learning and have contributed in part to its success. Besides well-known explicit inductive biases such as L1 and L2 regularization, there are many implicit inductive biases, introduced by implicit regularizers such as the optimization algorithm, dropout, attention mechanisms, and transfer learning. These implicit inductive biases have shown a remarkable ability to improve generalization in deep learning. Unlike explicit inductive biases, however, an implicit bias is hard to identify and formulate explicitly, and there is no direct way to control its strength. As a result, implicit biases are difficult to apply in practice. This thesis strives to turn implicit biases into an explicit form in order to uncover and exploit them for pixel representation learning. We have uncovered and exploited three implicit biases for different pixel-level tasks: the spectral bias for the deep image prior, the salience bias for guided filtering, and the attentional bias for tiny object localization and counting. In addition, this thesis seeks to develop new inductive biases by exploiting prior knowledge. By discovering new knowledge, we have developed three inductive biases for best-in-class object counting.
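To make the explicit/implicit distinction concrete, the following sketch (illustrative only, not taken from the thesis) shows what makes L2 regularization an *explicit* inductive bias: the preference for small weights appears as a written-down penalty term, and its strength is directly controllable through the coefficient `lam`. Implicit biases, by contrast, come with no such term or knob.

```python
import numpy as np

def ridge_fit(X, y, lam):
    """Linear regression with an explicit L2 penalty (ridge regression):
    w = argmin ||Xw - y||^2 + lam * ||w||^2.
    The penalty encodes the bias 'prefer small weights' explicitly,
    and lam directly controls the strength of that bias."""
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)

# Synthetic data: only the first feature matters.
rng = np.random.default_rng(0)
X = rng.normal(size=(50, 10))
w_true = np.zeros(10)
w_true[0] = 3.0
y = X @ w_true + 0.1 * rng.normal(size=50)

w_weak = ridge_fit(X, y, lam=1e-6)
w_strong = ridge_fit(X, y, lam=100.0)

# Increasing lam strengthens the bias toward small weights,
# shrinking the norm of the learned solution.
assert np.linalg.norm(w_strong) < np.linalg.norm(w_weak)
```

The point of the sketch is the contrast: for an explicit bias one can write down the penalty and tune `lam`; for an implicit bias (e.g. the spectral bias of the deep image prior) no such formula or dial is available, which is exactly the difficulty the thesis addresses.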