The presence of bias in today's IR systems has raised concerns about the social responsibility of IR. Fairness has become an increasingly important consideration when building systems for information search and content recommendation. Fairness in IR is often framed as an optimization problem in which the system optimizes utility subject to a set of fairness constraints, optimizes fairness while guaranteeing a lower bound on utility, or jointly optimizes both utility and fairness to achieve overall satisfaction. While various optimization algorithms have been proposed along with theoretical analyses, in real-world applications the performance of different optimization algorithms often depends heavily on the data. It is therefore consequential to ask: What is the solution space characterized by the data? What effect does introducing fairness have on the system? And can we identify this solution space to help us trade off different optimization policies, guide the choice of suitable algorithms, and inform adjustments to the data? In this work, we propose a framework that offers a novel perspective on optimization with fairness constraints. Our framework can effectively and efficiently estimate the solution space and answer such questions, and it has the added advantages of simplicity, explainability, and reliability. Specifically, we derive theoretical expressions that identify the fairness and relevance bounds for data of different distributions, and apply them to both synthetic and real-world datasets. We present a series of use cases demonstrating how our framework facilitates various analyses and decision making.
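
The three formulations mentioned above can be sketched schematically as follows; the symbols are illustrative notation only and are not taken from the source ($\pi$ denotes a ranking policy, $U(\pi)$ its utility, $F(\pi)$ a measure of unfairness where lower is fairer, and $\epsilon$, $\tau$, $\lambda$ are assumed thresholds and a trade-off weight):

```latex
% (1) Maximize utility subject to fairness constraints:
\max_{\pi} \; U(\pi) \quad \text{s.t.} \quad F(\pi) \le \epsilon

% (2) Optimize fairness while guaranteeing a lower bound on utility:
\min_{\pi} \; F(\pi) \quad \text{s.t.} \quad U(\pi) \ge \tau

% (3) Jointly optimize utility and fairness via a weighted objective:
\max_{\pi} \; U(\pi) - \lambda \, F(\pi)
```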