Search results

  1. This is just a quick and condensed note on the basic definitions and characterizations of concave, convex, quasiconcave and (to some extent) quasiconvex functions, with some examples. Contents: 1 Concave and convex functions; 1.1 Definitions.
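
    For reference, the standard definitions such a note works from (in the usual notation; the note's own symbols may differ): for a convex domain $X$, all $x, y \in X$ and all $\lambda \in [0,1]$, $f$ is concave if $f(\lambda x + (1-\lambda)y) \ge \lambda f(x) + (1-\lambda)f(y)$, convex if the reverse inequality $\le$ holds, and quasiconcave if $f(\lambda x + (1-\lambda)y) \ge \min\{f(x), f(y)\}$.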

  2. It is also possible to characterize concavity or convexity of functions in terms of the convexity of particular sets. Given the graph of a function, the hypograph of f,

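    The characterization the snippet is heading toward is standard: the hypograph $\operatorname{hyp} f = \{(x, t) : t \le f(x)\}$ is convex exactly when $f$ is concave, and dually $f$ is convex exactly when its epigraph $\operatorname{epi} f = \{(x, t) : t \ge f(x)\}$ is convex.
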
  3. Why do we need concavity and convexity? We will make the following important assumptions, denoted by CC: 1. The set Z is convex; 2. The function g is concave; 3. The function h is convex. Recall the definition of the set B: $B = \{(k, v) : k \ge h(z),\ v \le g(z) \text{ for some } z \in Z\}$. Proposition: under CC, the set B is convex. Proof: suppose that $(k_1, v_1)$ and ...
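
    The truncated proof presumably continues along the standard lines (assuming the inequalities $k \ge h(z)$, $v \le g(z)$ reconstructed above): take $(k_1, v_1), (k_2, v_2) \in B$ with witnesses $z_1, z_2 \in Z$ and $\lambda \in [0, 1]$. Since $Z$ is convex, $z_\lambda = \lambda z_1 + (1-\lambda) z_2 \in Z$; convexity of $h$ gives $h(z_\lambda) \le \lambda h(z_1) + (1-\lambda) h(z_2) \le \lambda k_1 + (1-\lambda) k_2$, and concavity of $g$ gives $g(z_\lambda) \ge \lambda v_1 + (1-\lambda) v_2$, so the convex combination of the two points again lies in $B$.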

  4. In this lecture, we shift our focus to the other important player in convex optimization, namely, convex functions. Here are some of the topics that we will touch upon: Convex, concave, strictly convex, and strongly convex functions. First and second order characterizations of convex functions.
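
    The first- and second-order characterizations referred to are the standard ones: for differentiable $f$ on a convex domain, convexity is equivalent to $f(y) \ge f(x) + \langle \nabla f(x), y - x\rangle$ for all $x, y$, and for twice-differentiable $f$ it is equivalent to $\nabla^2 f(x) \succeq 0$ everywhere.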

    • 1.1.2 Functions
    • 1.1.3 Suprema and infima
    • 3.1 The alternating directions method of multipliers
    • Basis pursuit
    • 3.1.3 Example: consensus optimization
    • $p = \inf\,\{f(x) + g(z) : Ax = Bz\}$
    • 3.2 The proximal point method
    • 4.1.1 Regular subgradients
    • 4.1.2 Limiting subgradients

    A simple but very useful trick in convex analysis is to allow functions to take on values on the extended real line, $\overline{\mathbb{R}} = \mathbb{R} \cup \{-\infty, +\infty\}$.
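
    The most common use of this trick, and the one relied on below for the sets $b+L$ and $\mathbb{R}^n_+$, is the indicator function of a set $C$ (written $\delta_C$ here; the notes' own symbol may differ): $\delta_C(x) = 0$ if $x \in C$ and $+\infty$ otherwise, so a constraint $x \in C$ can be folded into the objective.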

    For a finite set $S \subseteq \mathbb{R}$ (meaning a set containing a finite number of real numbers), $\max S$ is the largest element of $S$ and $\min S$ is the smallest. The supremum $\sup S$ and infimum $\inf S$ generalize the maximum and minimum for sets that aren't finite. The supremum and infimum are also known as the least upper bound and greatest lower bound, respectively.

    The alternating direction method of multipliers (ADMM) is a popular algorithm for large-scale optimization of Fenchel-type problems. It's a splitting method, meaning it operates on $f$ and $g$ separately. There are many variants of ADMM; we'll discuss one. Consider linear $A : E \to F$, convex $f : E \to \overline{\mathbb{R}}$ and $g : F \to \overline{\mathbb{R}}$, and the primal-dual pair p = inf {f(x) + g...
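
    A minimal ADMM skeleton in Python for the splitting $\min\, f(x) + g(z)$ subject to $Ax = Bz$ (the form that appears later in these notes; take $B = I$ for the basic case). The oracle names `x_update`/`z_update`, the scaled multiplier `u`, and the iteration count are assumptions of this sketch, not the notes' notation:

    ```python
    import numpy as np

    def admm(x_update, z_update, A, B, z0, u0, iters=200):
        """Generic ADMM skeleton for: minimize f(x) + g(z) subject to Ax = Bz.

        x_update(z, u) should return argmin_x f(x) + (rho/2)*||A@x - B@z + u||^2,
        z_update(x, u) should return argmin_z g(z) + (rho/2)*||A@x - B@z + u||^2,
        with the penalty rho folded into the oracles and u the scaled multiplier.
        """
        z, u = np.asarray(z0, dtype=float), np.asarray(u0, dtype=float)
        for _ in range(iters):
            x = x_update(z, u)        # minimize the augmented Lagrangian over x
            z = z_update(x, u)        # then over z
            u = u + A @ x - B @ z     # scaled dual (multiplier) update on the residual
        return x, z, u
    ```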

    ... $= \inf_x\ \langle q, x\rangle + \delta_{b+L}(x) + \delta_{\mathbb{R}^n_+}(x)$, where $L$ is a subspace. This can be solved by setting $f(x) = \langle q, x\rangle + \delta_{b+L}(x)$ and $g(x) = \delta_{\mathbb{R}^n_+}(x)$ and applying ADMM. Similarly, linearly constrained quadratic programs can be solved by replacing $\langle q, x\rangle$ by $x^T P x$, where $P \in \mathbb{S}^n_+$.
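
    Under this splitting both ADMM subproblems have closed forms (a sketch, assuming the identity coupling $x = z$ and a step size $t$): the $f$-update is $\operatorname{prox}_{tf}(v) = \Pi_{b+L}(v - tq)$, a projection onto the affine set $b+L$, and the $g$-update is $\operatorname{prox}_{tg}(v) = \max(v, 0)$ taken componentwise.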

    The basis pursuit problem $p = \inf_x\,\{\|x\|_1 + \delta_{b+L}(x)\}$ can be solved for billions of variables, since soft thresholding can be done in linear time. On this problem, ADMM typically proceeds in two phases. It starts by discovering the correct basis (i.e., which components of x should be zero), then it solves the linear system $x \in b+L$ in linear time. ...
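
    The linear-time step the snippet refers to is the proximal map of the $\ell_1$ norm; a minimal sketch (the function name is ours):

    ```python
    import numpy as np

    def soft_threshold(v, t):
        """Prox of t*||.||_1: shrink each component toward zero by t, in O(n) time."""
        return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)
    ```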

    An example that gives nice intuition for ADMM, and particularly for its usefulness in distributed problems, is the consensus optimization problem.
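
    The displayed formulation is cut off in the snippet; consensus optimization is usually stated as minimizing a sum $\sum_{i=1}^{N} f_i(x)$ of convex terms, which ADMM handles by giving each term its own copy $x_i$ and enforcing the consensus constraint $x_i = z$ for all $i$.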

    $p = \inf_{x,z}\,\{f(x) + g(z) : Ax = Bz\}$, where $A : E \to G$ and $B : F \to G$ are linear and $f : E \to \overline{\mathbb{R}}$ and $g : F \to \overline{\mathbb{R}}$ are convex. The convergence proof and algorithm follow through for this formulation, with two minor changes: the augmented Lagrangian is ...
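
    The snippet breaks off before the formula; for the constraint $Ax = Bz$ the augmented Lagrangian standardly takes the form
    \[ L_\rho(x, z, y) = f(x) + g(z) + \langle y,\, Ax - Bz\rangle + \tfrac{\rho}{2}\,\|Ax - Bz\|^2, \]
    though the notes' exact scaling may differ.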

    Consider a closed proper convex $f : E \to \overline{\mathbb{R}}$ and the problem $\inf_x f(x)$.

    It turns out that PPM works with $\partial f$ replaced by any maximal monotone operator.
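
    For concreteness, a minimal proximal point sketch in Python, with the iteration $x_{k+1} = \operatorname{prox}_{tf}(x_k) = (I + t\,\partial f)^{-1}(x_k)$ (the oracle name `prox_f` and the fixed step $t$ are assumptions of the sketch):

    ```python
    import numpy as np

    def proximal_point_method(prox_f, x0, t=1.0, iters=100):
        """PPM sketch: repeatedly apply the resolvent x <- prox_{t f}(x).

        prox_f(v, t) should return argmin_x f(x) + (1/(2*t))*||x - v||^2.
        """
        x = np.asarray(x0, dtype=float)
        for _ in range(iters):
            x = prox_f(x, t)   # one resolvent step of (I + t*df)^{-1}
        return x

    # Example: f = ||.||_1, whose prox is soft thresholding; the iterates shrink to 0.
    x_star = proximal_point_method(
        lambda v, t: np.sign(v) * np.maximum(np.abs(v) - t, 0.0),
        x0=[3.0, -0.5, 1.2], t=0.5, iters=50)
    ```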

    ...is convex, and projecting onto $\mathbb{R}^m_+$ is easy (just take the componentwise positive part). It's also easy to produce a subgradient of $\varphi(\cdot)$, as the following proposition shows.

    Observe that $\varphi(\lambda) \ge h(\bar z) - \langle \lambda, g(\bar z)\rangle$ for every $\lambda$, while $\varphi(\bar\lambda) = h(\bar z) - \langle \bar\lambda, g(\bar z)\rangle$. Hence $\varphi(\lambda) \ge \varphi(\bar\lambda) + \langle -g(\bar z),\, \lambda - \bar\lambda\rangle$, i.e. $-g(\bar z)$ is a subgradient of $\varphi$ at $\bar\lambda$.

    Recall that for convex $f : E \to \overline{\mathbb{R}}$, $y$ is a subgradient of $f$ at $x$ if for all $z$, $f(x + z) \ge f(x) + \langle y, z\rangle$. This subgradient inequality is a one-sided, global estimate of $f(x+z)$. We used subgradients extensively in both duality theory and algorithms for convex optimization. For continuously differentiable $f : E \to \mathbb{R}$, we have the analogous result that $y$ is ...
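
    As a quick illustration (not from the notes): for $f(x) = |x|$ on $\mathbb{R}$, the inequality $|z| \ge 0 + yz$ holds for every $z$ exactly when $|y| \le 1$, so $\partial f(0) = [-1, 1]$, while away from the origin the subgradient is unique and equals $f'(x) = \operatorname{sign}(x)$.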

    The graph of the regular subdifferential is generally not closed. This is unsatisfying, since we'd like to take limits, e.g., to prove the convergence of algorithms. An object that allows this sort of analysis is the limiting subdifferential.

    Applying the Krein-Rutman polar cone calculus (a consequence of Fenchel duality) to the right-hand side, the optimality condition reduces to

  5. The function $f$ is convex if for all $x, y \in X$ and for all $\lambda \in [0, 1]$, we have $f(\lambda x + (1-\lambda)y) \le \lambda f(x) + (1-\lambda)f(y)$; $f(x)$ is concave if $-f(x)$ is convex. If $f(x)$ is convex, then $af(x)$ is convex if $a > 0$. If $f(x)$ and $g(x)$ are convex, then $h(x) = f(x) + g(x)$ is convex. If $f(x)$ and $g(x)$ are convex, then $h(x) = f(x)\,g(x)$ is not necessarily convex.
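
    For the last point, a quick counterexample (ours, not from the source): $f(x) = x$ and $g(x) = 1 - x$ are both convex (affine), yet $h(x) = f(x)g(x) = x - x^2$ has $h''(x) = -2 < 0$, so the product is concave rather than convex.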


  6. Concavity - Lamar.edu (www.math.lamar.edu › faculty › maesumi)

    The following notions of concavity are used to describe the increase and decrease of the slope of the tangent to a curve. In Figure 3.11, for example, the production curve was concave upward to the left of the point of diminishing returns and concave downward to the right of this point.
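
    A standard check behind this language (our illustration, not from the Lamar page): the sign of the second derivative determines the concavity, e.g. $f(x) = x^3$ has $f''(x) = 6x$, so the graph is concave downward for $x < 0$ and concave upward for $x > 0$, with an inflection point at $x = 0$ where the concavity changes, analogous to the point of diminishing returns in the production example.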
