The paper introduces a new approach to multi-fidelity optimization. The approach employs gradient-based optimization, where the one-dimensional search points are evaluated using high-fidelity analysis, while the gradients are evaluated using low-fidelity analysis. Correlation between the results of the high- and low-fidelity analyses is not required. The approach is demonstrated using two example problems. Computational savings in terms of time and the number of high-fidelity analyses are discussed.
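The core idea — cheap low-fidelity gradients steering a line search whose candidate points are scored with the expensive high-fidelity model — can be sketched as follows. This is a minimal illustration, not the paper's method: `high_fidelity` and `low_fidelity` are hypothetical stand-in objectives (the low-fidelity one deliberately biased and scaled), and the grid line search is a simplification of a real one-dimensional search.

```python
import numpy as np

def high_fidelity(x):
    # Hypothetical expensive objective (stand-in for a costly simulation).
    return np.sum((x - 1.0) ** 2)

def low_fidelity(x):
    # Hypothetical cheap surrogate: biased and rescaled relative to the
    # high-fidelity model. It is used only to obtain a search direction,
    # so no correlation between the two models' values is assumed.
    return 0.8 * np.sum((x - 1.2) ** 2) + 0.5

def low_fidelity_gradient(x, h=1e-6):
    # Central finite-difference gradient of the cheap model.
    g = np.zeros_like(x)
    for i in range(len(x)):
        e = np.zeros_like(x)
        e[i] = h
        g[i] = (low_fidelity(x + e) - low_fidelity(x - e)) / (2 * h)
    return g

def multifidelity_descent(x0, iters=50):
    x = np.asarray(x0, dtype=float)
    hf_evals = 0
    for _ in range(iters):
        d = -low_fidelity_gradient(x)  # descent direction from low fidelity
        # One-dimensional search along d, with each candidate point
        # evaluated by the high-fidelity model:
        alphas = np.linspace(0.0, 1.0, 11)
        vals = [high_fidelity(x + a * d) for a in alphas]
        hf_evals += len(alphas)
        x = x + alphas[int(np.argmin(vals))] * d
    return x, hf_evals
```

Even with the surrogate's bias, the high-fidelity line search corrects the step length each iteration, so the iterate converges toward the high-fidelity minimizer; the high-fidelity evaluation count grows only with the line-search budget, not with the problem dimension.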