
Take Advantage of warm start after compute_backward #333

Open
yyylllaaafff opened this issue Jun 16, 2024 · 3 comments


@yyylllaaafff

Hi,

Thanks for this powerful suite.
I have a QP that is initialized, solved, and then differentiated with compute_backward, like below:

qp = proxsuite.proxqp.dense.QP(nz, neq, nineq)
qp.init(...) # init qp here ...
qp.solve()
proxsuite.proxqp.dense.compute_backward(qp,...)
qp.update(...) # then update and solve again
qp.solve() # how can we take advantage of warm start here?

@fabinsch
Collaborator

Hi @yyylllaaafff ,

thanks for your interest in proxsuite. Indeed, we have an option for warm starting the solver:

qp = proxsuite.proxqp.dense.QP(nz, neq, nineq)
qp.settings.initial_guess = proxsuite.proxqp.InitialGuess.WARM_START_WITH_PREVIOUS_RESULT
qp.init(...)
qp.solve()
qp.update(...) 
qp.solve() # this should be faster now, as we reuse the primal and dual variables as initial guess; the factorization might also be reused, depending on which data you updated

Did you try this? Does the call to compute_backward lead to some unexpected behavior for you? Could you provide more details so we can better understand your warm-starting problem? Thanks.

@yyylllaaafff
Author

Here is an example,

import proxsuite
import numpy as np
import scipy.sparse as spa


def generate_mixed_qp(n, seed=1):
    # A function for generating sparse random convex qps in dense format

    np.random.seed(seed)
    n_eq = int(n / 4)
    n_in = int(n / 4)
    m = n_eq + n_in

    P = spa.random(
        n, n, density=0.075, data_rvs=np.random.randn, format="csc"
    ).toarray()
    P = (P + P.T) / 2.0

    s = max(np.absolute(np.linalg.eigvals(P)))
    P += (abs(s) + 1e-02) * spa.eye(n)
    P = spa.coo_matrix(P)
    q = np.random.randn(n)
    A = spa.random(m, n, density=0.15, data_rvs=np.random.randn, format="csc").toarray()
    v = np.random.randn(n)  # Fictitious solution
    u = A @ v
    l = -1.0e20 * np.ones(m)

    return P.toarray(), q, A[:n_eq, :], u[:n_eq], A[n_in:, :], u[n_in:], l[n_in:]


# load a qp object using qp problem dimensions
n = 10
n_eq = 2
n_in = 2
qp = proxsuite.proxqp.dense.QP(n, n_eq, n_in)
# generate a random QP
H, g, A, b, C, u, l = generate_mixed_qp(n)
# initialize the model of the problem to solve
qp.settings.compute_timings = True
qp.init(H, g, A, b, C, l, u)
# solve without warm start
qp.solve()
res = qp.results
print(f'Solve without warm_start: {res.info.setup_time:.4}, {res.info.solve_time:.4}, {res.info.run_time:.4}')


# create a new problem and update qp
g_new = 0.95 * g  # slightly different g_new
qp.settings.initial_guess = (
    proxsuite.proxqp.InitialGuess.WARM_START_WITH_PREVIOUS_RESULT
)
qp.update(g=g_new)
# solve it with warm start
qp.solve()
res = qp.results
print(f'Solve with warm_start: {res.info.setup_time:.4}, {res.info.solve_time:.4}, {res.info.run_time:.4}')


# compute backward and then update qp
rhs = np.concatenate([res.x, res.y, res.z])
proxsuite.proxqp.dense.compute_backward(
        qp=qp,
        loss_derivative=rhs,
        eps=1e-4,
        rho_backward=1e-6,
        mu_backward=1e-6)
# create a new problem and update qp
g_new2 = 0.95 * g_new  # slightly different g_new2
qp.settings.initial_guess = (
    proxsuite.proxqp.InitialGuess.WARM_START_WITH_PREVIOUS_RESULT
)
qp.update(g=g_new2)
# solve it with warm start after backward
qp.solve()
res = qp.results
print(f'Solve with warm_start after compute_backward: {res.info.setup_time:.4}, {res.info.solve_time:.4}, {res.info.run_time:.4}')

And what we got is,

Solve without warm_start: 50.58, 146.2, 196.8
Solve with warm_start: 0.0, 9.698, 9.698
Solve with warm_start after compute_backward: 0.0, 71.11, 71.11

As you can see, the solve_time increases a lot if we call compute_backward before the warm-started solve.
What can we do here?

@fabinsch
Collaborator

Thanks for providing this example, and yes I see your point.

I think there is currently no easy way around this. The reason is that we reuse the qp object inside the compute_backward function, as you can see here. We are actually solving a new QP inside the backward function, so the cached factorization etc. no longer corresponds to your outer qp, where you are only modifying g slightly.

We should think about a proper way to fix this @Bambade and @jcarpent.
