Convex vs. Non-Convex Functions: Why it Matters in Optimization for Machine Learning

Ruman
7 min read · Apr 18, 2023
Cover image source: mathworks.com

Outline

  • Introduction
  • Minima or maxima of a function
  • Loss minimization
  • Convex Function
  • Non-convex Function
  • Conclusion

Introduction

If you have been delving into the world of machine learning, you may have encountered the terms “convex” and “non-convex” functions in the context of optimization algorithms and loss minimization. In machine learning, a loss function can be either convex or non-convex.

Convex and non-convex functions play an important role in machine learning, particularly in optimization problems where we need to find the minimum or maximum value of a loss function.

In this article, we will explore what convex and non-convex functions are and why they are important in machine learning.

Minima or maxima of a function

The maxima and minima of a function refer to the highest and lowest points, respectively, on the graph of the function.

In mathematical terms, we can find the maxima and minima of a function by taking the derivative of the function and setting it equal to zero. The points where the derivative equals zero are known as critical points. We then analyze the behaviour of the function near these critical points to determine whether they are maxima or minima.
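As a quick illustration, here is a minimal sketch (assuming sympy is installed; the function f(x) = x³ − 3x is just an example) of finding critical points and classifying them with the second-derivative test:

    import sympy as sp

    x = sp.symbols("x")
    f = x**3 - 3 * x  # an illustrative function with one maximum and one minimum

    # Critical points: solutions of f'(x) = 0
    critical_points = sp.solve(sp.Eq(sp.diff(f, x), 0), x)  # [-1, 1]

    for p in critical_points:
        second = sp.diff(f, x, 2).subs(x, p)  # second-derivative test
        if second > 0:
            kind = "local minimum"
        elif second < 0:
            kind = "local maximum"
        else:
            kind = "inconclusive (possible saddle or inflection)"
        print(f"x = {p}: f''(x) = {second} -> {kind}")

Here f′(x) = 3x² − 3 vanishes at x = ±1; the second derivative 6x is negative at x = −1 (a maximum) and positive at x = 1 (a minimum).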

To gain a good understanding of convex and non-convex functions, it is crucial to have a grasp of concepts such as local minima, global minima, and saddle points.

f(x): a non-convex function with local minima, a global minimum, and a saddle point

Local minima

A local minimum is a point where the optimization algorithm finds a set of model parameters that correspond to the minimum value of the loss function within a small region of the parameter space.

However, this minimum value may not be the global minimum of the loss function, which corresponds to the smallest value of the loss function across the entire parameter space.

Global minima

Global minima are the absolute lowest points of the loss function, which correspond to the optimal set of parameters for a model. The goal of any optimization algorithm is to find the global minimum, which will produce the best results for the given problem.

Saddle point

A saddle point is a point in the parameter space where the loss function has a minimum value in one direction and a maximum value in another direction. At a saddle point, the gradient of the loss function is zero, which means that the optimization algorithm may slow down dramatically or stall there and fail to converge to the global minimum.

Saddle points can be problematic for optimization algorithms, and various methods have been proposed to deal with them, such as momentum-based methods and stochastic gradient descent.
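To make this concrete, here is a toy sketch (plain numpy; the function, starting point, and hyperparameters are illustrative) comparing vanilla gradient descent with a momentum-based update near the saddle of f(x, y) = x² − y² at the origin:

    import numpy as np

    def grad(p):
        # Gradient of f(x, y) = x**2 - y**2, which has a saddle at (0, 0)
        x, y = p
        return np.array([2 * x, -2 * y])

    def steps_to_escape(lr=0.05, momentum=0.0, start=(1.0, 1e-8)):
        """Count iterations until |y| > 0.5, i.e. until we leave the saddle region."""
        p = np.array(start)
        v = np.zeros(2)
        for step in range(1, 100_000):
            v = momentum * v - lr * grad(p)
            p = p + v
            if abs(p[1]) > 0.5:
                return step
        return None

    print("vanilla GD:", steps_to_escape(momentum=0.0))  # ~186 steps
    print("momentum:  ", steps_to_escape(momentum=0.9))  # ~65 steps

Both runs start almost exactly on the saddle's stable direction; momentum amplifies the tiny gradient along y and escapes roughly three times faster.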

Loss minimization

Image source: metaphysic.ai

The term “loss” in machine learning refers to the discrepancy, or error, between the predicted output of a model and the actual output. In other words, it represents the difference between the model's predictions and the values it actually observed. The loss of a machine learning model is calculated with the help of a loss function, and a lower loss value indicates better model performance. Therefore, it is crucial to develop a machine learning model with minimal loss to improve its accuracy.

To achieve minimal loss, we need to adjust the model's parameters. This process involves using optimization algorithms like gradient descent, which iteratively modify the model's parameters to reduce the overall error, or loss, of the model. These optimization algorithms are critical in machine learning, and choosing the right one for a given problem is crucial.
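Here is a minimal sketch of loss minimization in practice (numpy only; the synthetic data and hyperparameters are illustrative): fitting y ≈ w·x + b by gradient descent on the mean squared error:

    import numpy as np

    rng = np.random.default_rng(0)
    x = rng.uniform(-1, 1, size=100)
    y = 3.0 * x + 1.0 + rng.normal(scale=0.1, size=100)  # true w = 3, b = 1

    w, b, lr = 0.0, 0.0, 0.1
    for _ in range(500):
        error = (w * x + b) - y
        # Gradients of the mean squared error with respect to w and b
        grad_w = 2 * np.mean(error * x)
        grad_b = 2 * np.mean(error)
        # Step against the gradient to reduce the loss
        w -= lr * grad_w
        b -= lr * grad_b

    print(f"w = {w:.2f}, b = {b:.2f}, loss = {np.mean(((w * x + b) - y) ** 2):.4f}")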

When selecting an optimization algorithm, it is essential to consider whether the loss function is convex or non-convex. A convex loss function has no local minima other than its global minimum, so even a simple optimization algorithm will reliably find the best solution. A non-convex loss function, however, has local minima in addition to the global minimum, and finding the global minimum may require a more advanced optimization algorithm.

By now, you should have a clear understanding of why distinguishing between convex and non-convex loss functions is essential and how this knowledge can benefit you as a machine learning practitioner.

In the following sections, we will delve deeper into each of these concepts to enhance your understanding further.

Convex function

Convex functions are particularly important because every local minimum of a convex function is also a global minimum. This means that if we optimize a convex function, any minimum we find is guaranteed to be the best solution. This makes optimization easier and more reliable.

Graph of a convex function: https://en.wikipedia.org/wiki/Convex_function

From a more mathematical point of view

A function f(x) is convex if, for any two points x1 and x2 in the domain of f(x) and for any “t” in the range [0, 1], the following condition holds:

f(t·x1 + (1 − t)·x2) ≤ t·f(x1) + (1 − t)·f(x2)

In simpler terms, this means that the line segment between any two points on the graph of the function lies above or on the graph of the function, and not below it.

Another way to visualize this property is to imagine a bowl-shaped curve, where the function's value increases as you move away from the bottom of the bowl. This curvature ensures that every local minimum is also a global minimum, which makes optimization simpler.
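We can sanity-check the convexity condition numerically. The sketch below (plain numpy; the tested functions and sampling range are illustrative, and random sampling can only suggest convexity, not prove it) samples random point pairs and values of t and looks for violations of the inequality:

    import numpy as np

    def looks_convex(f, trials=10_000, low=-10.0, high=10.0, tol=1e-9):
        """Randomly test f(t*x1 + (1-t)*x2) <= t*f(x1) + (1-t)*f(x2)."""
        rng = np.random.default_rng(0)
        x1 = rng.uniform(low, high, trials)
        x2 = rng.uniform(low, high, trials)
        t = rng.uniform(0.0, 1.0, trials)
        lhs = f(t * x1 + (1 - t) * x2)
        rhs = t * f(x1) + (1 - t) * f(x2)
        return bool(np.all(lhs <= rhs + tol))

    print(looks_convex(lambda x: x**2))       # True: the squared (L2-style) shape is convex
    print(looks_convex(lambda x: np.sin(x)))  # False: a wavy function is not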

Many common loss functions and regularizers, including the following, are convex:

  • L2 loss
  • Log loss (convex for linear models such as logistic regression)
  • L1 regularization
  • L2 regularization

Non-convex function

A function is said to be non-convex if it is not convex. Geometrically, a non-convex function is one that curves downwards in places or has multiple peaks and valleys. It looks something like this:

Graph of a non-convex function

The challenge with non-convex functions is that they can have multiple local minima, which are points where the function has a lower value than all neighboring points.

This means that if we try to optimize a non-convex function, we may get stuck in a local minimum and miss the global minimum, which is the optimal solution we are looking for.

From a more mathematical point of view

A function f(x) is non-convex if there exist two points x1 and x2 in the domain of f(x) and some “t” in the range [0, 1] for which the convexity condition fails:

f(t·x1 + (1 − t)·x2) > t·f(x1) + (1 − t)·f(x2)

In other words, the line segment between two points on the graph of the function can lie below the graph in some places, creating hills and valleys.

Let’s understand the statement above in simple terms

Imagine a function that has a “hill” in the middle, like a bell curve. If we choose two points on opposite sides of the hill, the line segment between them passes under the peak. Where the hill rises above this line segment, the height of the function is greater than the height of the line segment at the same point, violating the convexity condition, so the function is non-convex.

This means that there exist points on the function where the curve rises above the straight line connecting two other points. This is what creates the hills and valleys in the function and can make it more difficult to find the global minimum.
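The bell-curve example is easy to check numerically (plain numpy; the chosen function and points are illustrative):

    import numpy as np

    f = lambda x: np.exp(-x**2)   # a bell-shaped "hill" centred at 0
    x1, x2, t = -1.5, 1.5, 0.5    # two points on opposite sides of the hill

    curve = f(t * x1 + (1 - t) * x2)     # f(0) = 1.0, the top of the hill
    chord = t * f(x1) + (1 - t) * f(x2)  # ~0.105, the chord's midpoint

    # The curve rises above the chord, so the convexity condition fails
    print(curve > chord)  # True -> f is non-convex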

Many common loss landscapes, including the following, are non-convex:

  • Binary or categorical cross-entropy loss when used with deep neural networks (the loss is non-convex in the network's parameters)
  • Adversarial loss functions in generative models such as GANs

Conclusion

Convex and non-convex functions are important concepts in machine learning, particularly in optimization problems. Convex functions have a unique global minimum, making optimization easier and more reliable. Non-convex functions, on the other hand, can have multiple local minima, making optimization more challenging.

When working with non-convex functions, it is important to use optimization techniques that help us avoid getting stuck in local minima. For example, gradient descent with random restarts (multiple random initializations), momentum, or annealing schedules can help us find good solutions even in non-convex optimization problems.
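As a final sketch, here is what random restarts might look like (numpy only; the wavy function, learning rate, and number of restarts are illustrative): run gradient descent from several random starting points and keep the best result:

    import numpy as np

    f = lambda x: np.sin(3 * x) + 0.5 * x**2  # a 1-D function with several local minima
    grad = lambda x: 3 * np.cos(3 * x) + x    # its derivative

    def gradient_descent(x0, lr=0.01, steps=2_000):
        x = x0
        for _ in range(steps):
            x -= lr * grad(x)
        return x

    # Restart from several random initializations and keep the lowest loss
    rng = np.random.default_rng(0)
    candidates = [gradient_descent(x0) for x0 in rng.uniform(-3, 3, size=10)]
    best = min(candidates, key=f)
    print(f"best x = {best:.3f}, f(x) = {f(best):.3f}")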

In summary, understanding the difference between convex and non-convex functions is essential for machine learning practitioners who want to optimize functions efficiently and reliably.
