We present a framework of algorithms for the solution of continuous optimization and variational inequality problems. In the general algorithm, an auxiliary problem that defines a search direction is obtained by replacing the original cost function with an approximating monotone cost function; a line search is then made in the search direction to obtain a decrease in a merit function. The proposed framework encompasses algorithm classes presented earlier by Cohen, Dafermos, Migdalas, and Tseng, and includes numerous descent and successive approximation methods, such as Newton methods, Jacobi and Gauss–Seidel type decomposition methods for problems defined over Cartesian product sets, and proximal point methods, among others. The auxiliary problem of the general algorithm also induces equivalent optimization reformulations of, and descent methods for, asymmetric variational inequalities. We study the convergence properties of the general algorithm when applied to unconstrained optimization, nondifferentiable optimization, constrained differentiable optimization, and variational inequalities; the convergence analyses emphasize basic convergence results, convergence under different line search strategies and truncated subproblem solutions, and convergence rate results. The analysis unifies known results; moreover, it strengthens the convergence results for many existing algorithms and indicates possible improvements in their realizations.
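
As a minimal illustrative sketch (not part of the original abstract), one iteration of the general scheme for a variational inequality VI$(F, X)$ may be written as follows; the approximating mapping $\Phi$, the merit function $\psi$, and the step length $t_k$ are our notation, assuming the standard cost-approximation form of the auxiliary problem:

\begin{align*}
  &\text{find } y^k \in X \text{ such that }
    \bigl[\Phi(y^k, x^k) + F(x^k) - \Phi(x^k, x^k)\bigr]^{\mathsf{T}} (x - y^k) \ge 0
    \quad \text{for all } x \in X,\\
  &d^k := y^k - x^k, \qquad x^{k+1} := x^k + t_k\, d^k,
\end{align*}

where $\Phi(\cdot, x^k)$ is a monotone approximation of $F$ near the iterate $x^k$, and $t_k$ is chosen by a line search so as to decrease a merit function $\psi$. For instance, choosing $\Phi(y, x) = \nabla F(x)\, y$ turns the auxiliary problem into the linearized variational inequality of a Newton-type method, while $\Phi(y, x) = \tfrac{1}{\gamma}\, y$ with $\gamma > 0$ makes $y^k$ the projection of $x^k - \gamma F(x^k)$ onto $X$, i.e., a gradient projection step; for optimization problems one takes $F = \nabla f$.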