Sparse reconstruction algorithms aim to recover a high-dimensional sparse signal from a limited number of measurements. Common examples are LASSO and Basis Pursuit, where sparsity is enforced by an ℓ1 penalty combined with the quadratic cost ||y − Hx||₂². For random design matrices H, a sharp phase-transition boundary separates a 'good' parameter region, where error-free recovery of a sufficiently sparse signal is possible, from a 'bad' region where recovery fails. However, the theoretical analysis of the phase-transition boundary for correlated variables lags behind that for uncorrelated variables. Here we use the replica method from statistical physics to show that when an N-dimensional signal x is K-sparse and H is M × N with covariance E[Hia Hjb] = (1/M) Cij Dab, where Daa = 1 for all a, perfect recovery occurs at M ∼ φK(D) K log(N/M) in the very sparse limit, where φK(D) ≥ 1, indicating that correlations increase the number of observations needed for the same degree of sparsity.
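The setting above can be simulated directly. The sketch below (illustrative only; the sizes N, M, K, the AR(1) choice of D, the identity choice of C, and the ISTA solver are all assumptions, not taken from the paper) draws a design matrix H with the stated covariance E[Hia Hjb] = (1/M) Cij Dab via H = (1/√M) G D^{1/2} with G i.i.d. Gaussian, then solves the LASSO problem by proximal gradient descent to check recovery of a K-sparse signal from noiseless measurements y = Hx.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative problem sizes (not from the paper): N-dim signal, K-sparse,
# M noiseless measurements.
N, M, K = 200, 80, 5
rho = 0.5  # hypothetical column-correlation strength

# Column covariance D with Daa = 1 (AR(1) structure, chosen for illustration);
# the row covariance C is taken as the identity here for simplicity.
idx = np.arange(N)
D = rho ** np.abs(idx[:, None] - idx[None, :])

# H = (1/sqrt(M)) G D^{1/2} gives E[Hia Hjb] = (1/M) delta_ij D_ab.
Dsqrt = np.linalg.cholesky(D)
H = rng.standard_normal((M, N)) @ Dsqrt.T / np.sqrt(M)

# K-sparse ground truth with +/-1 entries, and measurements y = H x.
x_true = np.zeros(N)
support = rng.choice(N, K, replace=False)
x_true[support] = rng.choice([-1.0, 1.0], K)
y = H @ x_true

# LASSO via ISTA: minimize 0.5*||y - Hx||_2^2 + lam*||x||_1.
lam = 1e-3  # small penalty to approximate the noiseless Basis Pursuit limit
L = np.linalg.eigvalsh(H.T @ H).max()  # Lipschitz constant of the gradient
x = np.zeros(N)
for _ in range(10000):
    z = x - H.T @ (H @ x - y) / L                          # gradient step
    x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft threshold

err = np.linalg.norm(x - x_true) / np.linalg.norm(x_true)
print(f"relative error: {err:.3f}")
```

With M well above K log(N/M), the ℓ1 estimate lands close to the true support; shrinking M toward the phase boundary (or increasing rho) makes recovery fail, which is the transition the abstract characterizes.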