Hypothesis testing is a foundational process in scientific discovery and data-driven decision making, yet it traditionally demands expert intervention and manual interpretation of both hypotheses and data. Tools such as PSPP and IBM SPSS provide interfaces for analysis, but using them still requires analysts to translate natural-language questions into formal statistical tests. Meanwhile, recent advances in NLP and ML offer tools that automate elements of scientific analysis, but integrating them into a full-cycle hypothesis-testing pipeline remains an open problem. A significant gap therefore exists: no integrated system automates the translation from human intent to statistical execution, that is, the ability to interpret natural-language hypotheses, align them with appropriate datasets, and execute the relevant statistical or ML models without human input. We propose a cognitive framework that combines LLMs with a statistical decision engine to fully automate the hypothesis-testing workflow. The system parses a hypothesis into a structured analytical intent using NLP techniques, maps that intent to structured data, and then selects and executes the appropriate statistical test. The framework concludes by translating the technical results into a clear, human-readable summary, replicating the outcome of a manual analysis. Using transformer-based models for semantic parsing and rule-based statistical test selection, we will demonstrate that the system can accurately validate causal and correlational hypotheses, evaluating its performance against expert-led analyses on benchmark datasets. By significantly reducing the cognitive load of early-stage hypothesis evaluation, the framework makes exploratory research more scalable; the immediate implication is a substantial acceleration of the research and discovery cycle across numerous fields.
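The rule-based selection stage described above can be sketched as follows. This is a minimal illustration, not the proposed implementation: the intent fields (`claim`, `variable_type`) are hypothetical placeholders for whatever schema the semantic parser emits, and the Pearson correlation is computed in plain Python only to keep the sketch self-contained.

```python
import math

def select_test(intent):
    """Rule-based mapping from a parsed analytical intent to a test name.

    The intent dict and its field names are illustrative assumptions;
    a real system would use the schema produced by the NLP parsing stage.
    """
    claim = intent["claim"]            # e.g. "correlational" or "group_difference"
    dtype = intent["variable_type"]    # e.g. "continuous" or "categorical"
    if claim == "correlational" and dtype == "continuous":
        return "pearson_correlation"
    if claim == "group_difference" and dtype == "continuous":
        return "independent_t_test"
    if claim == "group_difference" and dtype == "categorical":
        return "chi_squared"
    return "unsupported"

def pearson_r(xs, ys):
    """Pearson correlation coefficient in plain Python (no external libraries)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# A parsed correlational hypothesis over two continuous variables
# routes to a Pearson correlation; perfectly linear data yields r = 1.0.
intent = {"claim": "correlational", "variable_type": "continuous"}
test_name = select_test(intent)                 # "pearson_correlation"
r = pearson_r([1, 2, 3, 4], [2, 4, 6, 8])       # 1.0
```

In a full system this dispatch table would also consult dataset metadata (sample size, normality, number of groups) before choosing between parametric and non-parametric tests, and the numeric routine would be delegated to a statistics library.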