In the following we state first- and second-order conditions which characterize convexity of a function f . We write D_f to denote the domain of f , i.e. the subset of R^n where f is defined. For more details and proofs we refer to: S. Boyd and L. Vandenberghe, Convex Optimization, Cambridge University Press, 2004.
Suppose f is differentiable (i.e. \nabla f(x) is well defined for all x in the domain of f ). Then f is convex if and only if D_f is a convex set and f(y) \geq f(x) + \nabla f(x)^T (y-x) holds for all x,y \in D_f . This condition means that for a convex function the first-order Taylor expansion (the right-hand side above) at any point is a global underestimator of the function. To convince yourself, sketch f(x) = x^2+1 together with the tangent line at any point and note that the tangent line never lies above the graph.
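As a concrete illustration (not part of the original text), the Python sketch below checks the first-order inequality numerically for f(x) = x^2 + 1, whose derivative is f'(x) = 2x; the grid range, resolution, and tolerance are arbitrary choices made for this example.

```python
import numpy as np

# First-order convexity check for f(x) = x^2 + 1 on a grid of points.
# f is convex iff f(y) >= f(x) + f'(x) * (y - x) for all x, y in the domain.
f = lambda x: x**2 + 1
grad_f = lambda x: 2 * x          # derivative of f (the gradient in one dimension)

xs = np.linspace(-5.0, 5.0, 201)  # sample points x
ys = np.linspace(-5.0, 5.0, 201)  # sample points y
X, Y = np.meshgrid(xs, ys)

lhs = f(Y)                        # f(y)
rhs = f(X) + grad_f(X) * (Y - X)  # first-order Taylor expansion at x, evaluated at y
print(np.all(lhs >= rhs - 1e-12)) # True: the tangent line is a global underestimator
```

Of course a finite grid cannot prove the inequality for all x and y; it only illustrates the geometric picture described above.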
Assume that f is twice differentiable, i.e. the Hessian matrix \nabla^2 f(x) exists at each point in D_f . Then f is convex if and only if D_f is a convex set and the Hessian is positive semidefinite for all x \in D_f . For a single-variable function this reduces to f''(x) \geq 0 . Geometrically this means that f has nonnegative curvature everywhere.
This condition is particularly useful since it gives us a practical procedure for determining whether the function under consideration is convex, apart from using the definition.
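A minimal sketch of this procedure in Python, under the assumption that the Hessian is available in closed form: the helper name is_psd and the two quadratic examples below are illustrative choices, not taken from the text. Positive semidefiniteness is checked via the eigenvalues of the (symmetric) Hessian.

```python
import numpy as np

def is_psd(H, tol=1e-10):
    """Check positive semidefiniteness via the eigenvalues of a symmetric matrix."""
    return bool(np.all(np.linalg.eigvalsh(H) >= -tol))

# Example: f(x) = x1^2 + x1*x2 + x2^2 has the constant Hessian below.
H = np.array([[2.0, 1.0],
              [1.0, 2.0]])
print(is_psd(H))          # True -> f is convex on all of R^2

# Counterexample: f(x) = x1^2 - x2^2 (a saddle) is not convex.
H_saddle = np.array([[2.0,  0.0],
                     [0.0, -2.0]])
print(is_psd(H_saddle))   # False
```

For a non-quadratic f the Hessian depends on x, and one must verify positive semidefiniteness at every point of D_f, not just at a single point as in this sketch.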