Bonafide Research employs a comprehensive and iterative research methodology focused on minimizing deviation in order to provide the most accurate estimates and forecasts possible. The company uses a combination of bottom-up and top-down approaches to segment and estimate the quantitative aspects of the market. In addition, a recurring theme across all our research reports is data triangulation, which looks at the market from multiple perspectives.
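
To illustrate the idea of reconciling bottom-up and top-down estimates through triangulation, the sketch below combines two hypothetical market-size figures. All numbers, weights and function names are illustrative assumptions for demonstration only, not figures or methods from any Bonafide Research study.

```python
# Illustrative only: hypothetical figures showing how a bottom-up estimate
# (summing company-level revenues) can be triangulated against a top-down
# estimate (overall market size multiplied by an assumed segment share).

def bottom_up_estimate(company_revenues_usd_mn):
    """Market size as the sum of revenues reported by individual players."""
    return sum(company_revenues_usd_mn)

def top_down_estimate(total_market_usd_mn, segment_share):
    """Market size as a share of a broader, published market figure."""
    return total_market_usd_mn * segment_share

def triangulate(estimates, weights):
    """Weighted reconciliation of independent estimates (weights sum to 1)."""
    return sum(e * w for e, w in zip(estimates, weights))

if __name__ == "__main__":
    bu = bottom_up_estimate([120.0, 85.0, 60.0, 35.0])  # hypothetical revenues
    td = top_down_estimate(1500.0, 0.21)                 # hypothetical share
    final = triangulate([bu, td], [0.6, 0.4])            # hypothetical weights
    print(f"Bottom-up: {bu:.1f}, Top-down: {td:.1f}, Triangulated: {final:.1f}")
```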

Vital elements of the methodology employed for all of our market research reports include:

PRIMARY DATA COLLECTION

Our primary data collection includes exhaustive personal interviews, conducted both face to face and over the telephone, to extract maximum information from participants such as industry executives, distributors and consumers. This gives us an idea of their revenues, exports, pricing, geographical presence, channel partner models, unique selling propositions and more. Interviews are conducted with leading companies across the value chain, including manufacturers, dealers and distributors, to ensure a holistic and unbiased picture of the market. These interviews are carried out across the nation, with language barriers overcome with the aid of local staff and interpreters. Primary interviews not only help us understand the market more deeply, but also provide critical insights into niche product categories and their current business scenarios. This first-hand information enhances the quality of our reports and minimizes the possibility of error.

SECONDARY RESEARCH

Our secondary data collection involves analyzing a vast array of information. Bonafide Research seeks information from third-party sources such as published articles, company websites, magazine articles, associations, trade journals, annual reports, official government websites and paid database sources. In addition, data is mined from a host of reports in our repository, as well as a number of paid Indian government databases.

ANALYSING THE COLLECTED DATA

Raw market data collected through primary and secondary sources is collated on a broad front. The data is continuously filtered to ensure that only validated and authenticated sources are considered. Data analysis involves critical examination and interpretation of figures, and attempts to find the rationale behind the main findings. Industry dynamics with respect to drivers, restraints and pricing trends are analyzed simultaneously. As a result, the material developed contains a wide range of original data that is then cross-validated and authenticated against published sources.

FORECASTING THROUGH STATISTICAL MODELS

Our market estimates and forecasts are derived by applying various statistical methods such as moving averages, single exponential smoothing, double exponential smoothing, decomposition, auto-correlation and ARIMA. Data is fed into our software and a unique model is created for each study. Gathered information on market dynamics, trends, challenges and pricing is also taken into consideration when selecting the model. These factors are studied on a comparative basis, and their impact over the forecast period is quantified with the help of correlation, regression and time series analysis. Market forecasts are then validated and cross-checked through a combination of economic tools, industry experience and domain expertise. This is critical for a deep understanding of our estimates as well as for ensuring minimal errors.
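
As a simplified illustration of one such technique, the sketch below applies double (Holt's) exponential smoothing to a hypothetical annual market-size series and projects it three periods forward. The series, smoothing parameters and function name are assumptions for demonstration purposes, not the actual model or data used in any report.

```python
# Minimal sketch of double (Holt's) exponential smoothing on a hypothetical
# annual market-size series. Alpha/beta values and data are illustrative only.

def double_exponential_smoothing(series, alpha, beta, horizon):
    """Return one-step-ahead fitted values plus `horizon` forecast points."""
    level, trend = series[0], series[1] - series[0]
    fitted = [series[0]]
    for value in series[1:]:
        prev_level = level
        level = alpha * value + (1 - alpha) * (level + trend)
        trend = beta * (level - prev_level) + (1 - beta) * trend
        fitted.append(level + trend)
    forecast = [level + (h + 1) * trend for h in range(horizon)]
    return fitted, forecast

if __name__ == "__main__":
    market_usd_mn = [310, 335, 362, 390, 421, 455]  # hypothetical historical data
    _, outlook = double_exponential_smoothing(market_usd_mn, alpha=0.5, beta=0.3, horizon=3)
    print("Three-year outlook (USD mn):", [round(x, 1) for x in outlook])
```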
