
The new education policy aims at translating innovative thinking into attainable research acumen. This is all the more relevant now that global research, long centred on semiconductor chips, is turning toward solution research. It is especially true of methodological solutions for new-age social science research, which is complex and multidimensional, yet enthralling. We are now blending physics and mathematics into social science research to make it more concrete, predictable, amenable to social ecology and adaptable to change kinetics. The book covers almost all classical and modern mathematical and statistical approaches to research and prediction. It is perhaps the only book to compound such an array of linear and non-linear analytical tools and techniques, incorporating ANN, FCM, CCA, AHP and, very essentially, grounded theory, meta-analysis, game theory and participatory data analyses. I hope and believe the book will stir young minds and creativity, making research more of an excitement and less of a monotonous rigmarole. For a global audience across disciplines in need of a comprehensive book on research methodology, Research Methodology: Process, Techniques and Application promises to be of great utility and applicability to one and all.
1.1. Introduction

Research in simple terms refers to a search for knowledge. It is a scientific and systematic search for information on a particular topic or issue. It is also known as the art of scientific investigation. Several social scientists have defined research in different ways. In the Encyclopaedia of Social Sciences, D. Slesinger and M. Stephenson (1930) defined research as “the manipulation of things, concepts or symbols for the purpose of generalizing to extend, correct or verify knowledge, whether that knowledge aids in the construction of theory or in the practice of an art”. According to Redman and Mory (1923), research is a “systematized effort to gain new knowledge”. It is an academic activity and therefore the term should be used in a technical sense. According to Clifford Woody (Kothari, 1988), research comprises “defining and redefining problems, formulating hypotheses or suggested solutions; collecting, organizing and evaluating data; making deductions and reaching conclusions; and finally, carefully testing the conclusions to determine whether they fit the formulated hypotheses”.
Statistics is a branch of science which deals with the collection, classification, description and interpretation of data obtained by conducting surveys and experiments. Its essential purpose is “to describe and draw valid inferences about numerical properties of populations” (Ferguson, 1966). Experience in applying statistics indicates that a single statistical method can be used in different research areas for dealing with different types of problems. The various possibilities which the application of a statistical method provides in a given research area should be considered as an adequate way of dealing with a research problem. However, it should be noted that statistics is not a method by which every problem in a research study can be solved. Nachmias and Nachmias (2009) stated that there are basically two types of statistics: descriptive and inferential. Descriptive statistics enable the researcher to summarize and organize data in an effective and meaningful way; they involve the use of tables, charts, graphs, means, modes, medians, standard scores and correlations to treat collected data. Inferential statistics are concerned with making inferences about a population from a sample of it; they allow the researcher to make decisions or inferences by interpreting data patterns. Researchers use inferential statistics to determine whether an expected pattern designated by theory and hypotheses is actually found in the observations. To decide whether a hypothesis is true, researchers might survey respondents, use descriptive statistics to compare groups, and then employ inferential statistics to determine whether the differences between the groups are significant.
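As a minimal sketch of that workflow (the scores below are invented, and the example assumes the SciPy library is available for the t-test), descriptive statistics first summarize two groups, and an inferential test then judges whether their difference is significant:

```python
# A minimal sketch: descriptive statistics to summarize two groups,
# then an inferential test to decide whether their difference is significant.
# The scores below are illustrative, not real survey data.
import statistics
from scipy import stats  # assumed available; provides the t-test

group_a = [72, 75, 78, 71, 74, 77, 73]   # e.g., scores from respondents in group A
group_b = [68, 70, 66, 71, 69, 67, 70]   # e.g., scores from respondents in group B

# Descriptive statistics: summarize and organize the data
for name, g in [("A", group_a), ("B", group_b)]:
    print(f"Group {name}: mean={statistics.mean(g):.2f}, "
          f"median={statistics.median(g)}, sd={statistics.stdev(g):.2f}")

# Inferential statistics: is the observed difference likely to hold in the population?
t_stat, p_value = stats.ttest_ind(group_a, group_b)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")  # a small p suggests a significant difference
```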
In statistics, sampling is concerned with the selection of a subset of individuals from within a statistical population to estimate characteristics of the whole population. Two advantages of sampling are that the cost is lower and data collection is faster than measuring the entire population. Each observation measures one or more properties (such as weight, location, colour) of observable bodies distinguished as independent objects or individuals. In survey sampling, weights can be applied to the data to adjust for the sample design, particularly in stratified sampling. Results from probability theory and statistical theory are employed to guide the practice. In business and medical research, sampling is widely used for gathering information about a population. Acceptance sampling is used to determine whether a production lot of material meets the governing specifications. The sampling process comprises several stages:
• Defining the population of concern
• Specifying a sampling frame, a set of items or events that it is possible to measure
• Specifying a sampling method for selecting items or events from the frame
• Determining the sample size
• Implementing the sampling plan
• Sampling and data collection
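To illustrate stratified sampling and the design weights mentioned above, here is a minimal Python sketch; the frame, strata and per-stratum sample sizes are entirely hypothetical:

```python
# A minimal sketch of stratified random sampling. Each stratum is sampled
# separately, and a design weight (stratum size / sample size) is attached
# to every sampled unit. The population and strata are invented.
import random

random.seed(42)  # reproducible illustration

# Hypothetical sampling frame: units grouped into two strata
frame = {
    "urban": list(range(0, 6000)),     # 6,000 urban units
    "rural": list(range(6000, 10000))  # 4,000 rural units
}
sample_sizes = {"urban": 60, "rural": 40}  # chosen sample size per stratum

sample = []
for stratum, units in frame.items():
    n = sample_sizes[stratum]
    weight = len(units) / n  # each sampled unit "represents" this many units
    for unit in random.sample(units, n):
        sample.append((unit, stratum, weight))

print(len(sample), "units drawn; example record:", sample[0])
```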
Multivariate analysis is essentially the statistical process of simultaneously analyzing multiple independent variables with multiple dependent (outcome or criterion) variables using matrix algebra. During the last two or three decades, multivariate statistical analysis has become increasingly popular. The theory has made great progress, and with the rapid advances in computer technology, routine applications of multivariate statistical methods are implemented in several statistical software packages. The traditional approach to the teaching of multivariate statistical analysis, as exemplified by Anderson (1958), relies heavily on advanced matrix mathematics. On the other hand, Flury and Riedwyl (1988) suggest that it is possible to understand most of the basic ideas underlying multivariate statistical analysis without a mastery of such mathematics, provided that these are conveyed with the help of real data sets. In this chapter we propose a non-mathematical, data-driven approach to teaching multivariate statistical methods to students. Even so, we are mindful of the need for students to know some basic linear algebra and univariate statistical concepts. Such basic knowledge provides students with the foundation necessary for applying the appropriate multivariate statistical procedures and for interpreting the results.
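In that data-driven spirit, the sketch below applies one common multivariate technique, principal component analysis, to a small synthetic data matrix; the data, dimensions and induced correlation are invented purely for illustration:

```python
# A minimal, data-driven sketch of principal component analysis (PCA)
# via the covariance matrix. The 4-variable data matrix is synthetic.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 4))            # 50 observations on 4 variables
X[:, 1] += 0.8 * X[:, 0]                # induce some correlation for the example

Xc = X - X.mean(axis=0)                 # center each variable
cov = np.cov(Xc, rowvar=False)          # 4 x 4 covariance matrix
eigvals, eigvecs = np.linalg.eigh(cov)  # eigen-decomposition (ascending order)
order = np.argsort(eigvals)[::-1]       # sort components by explained variance

explained = eigvals[order] / eigvals.sum()
scores = Xc @ eigvecs[:, order]         # observations in component space
print("Proportion of variance explained:", np.round(explained, 3))
```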
A linear programming problem aims to maximize or minimize a linear function subject to linear constraints. In many cases, however, the objective function may not be linear, all or some of the constraints may not be linear, or both the objective and the constraints may be nonlinear. Such an optimization problem is called a nonlinear programming problem (NLPP). In this chapter, we discuss NLPPs. Nonlinear programming is the process of solving an optimization problem defined by a system of equalities and inequalities, collectively termed constraints, over a set of unknown real variables, along with an objective function to be maximized or minimized, where some of the constraints or the objective function are nonlinear. It is the sub-field of mathematical optimization that deals with problems that are not linear. A typical non-convex problem is that of optimizing transportation costs by selection from a set of transportation methods, one or more of which exhibit economies of scale, with various connectivity and capacity constraints. An example would be petroleum product transport given a selection or combination of pipeline, rail tanker, road tanker and river barge. Owing to economic batch sizes, the cost functions may have discontinuities in addition to smooth changes. In experimental science, some simple data analysis (such as fitting a spectrum with a sum of peaks of known location and shape but unknown magnitude) can be done with linear methods, but in general these problems, too, are nonlinear.
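As a concrete sketch of an NLPP (the objective, constraint and bounds below are illustrative, and the example assumes SciPy's SLSQP solver is available), we minimize a nonlinear cost subject to a nonlinear inequality constraint and simple bounds:

```python
# A minimal NLPP sketch: nonlinear objective, nonlinear constraint,
# nonnegativity bounds, solved with SciPy's SLSQP method.
import numpy as np
from scipy.optimize import minimize

def objective(x):
    # Nonlinear objective: f(x, y) = (x - 1)^2 + (y - 2.5)^2 + x*y
    return (x[0] - 1) ** 2 + (x[1] - 2.5) ** 2 + x[0] * x[1]

constraints = [
    # Inequality g(x) >= 0 encoding x^2 + y^2 <= 9
    {"type": "ineq", "fun": lambda x: 9 - x[0] ** 2 - x[1] ** 2},
]
bounds = [(0, None), (0, None)]  # x, y >= 0

result = minimize(objective, x0=np.array([1.0, 1.0]),
                  method="SLSQP", bounds=bounds, constraints=constraints)
print("optimal x:", np.round(result.x, 4), "objective:", round(result.fun, 4))
```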
Statistics as a broad subject is used in almost all disciplines, especially in research studies. Statistics provides scientific tools for representative data collection, appropriate analysis and summarization of data, and inferential procedures for drawing valid conclusions in the face of uncertainty. Every researcher should have some knowledge of statistics and must use statistical tools in his or her research; one should know the importance of statistical tools and how to use them in research or surveys. The quality assurance of the work must also be dealt with: the statistical operations necessary to control and verify the analytical procedures as well as the resulting data, since making mistakes in analytical work is unavoidable. The emergence of statistical software in the twenty-first century has helped researchers in the physical and social sciences to improve the quality of research. Renowned researchers who have adopted such software in their data analysis have recognized its immense contribution to research findings (Adetola, 2013). Quantitative research can hardly be done effectively without statistical software. Moreover, such software enables easy presentation of research data and helps professionals to interact with data, paving the way for creativity and innovation. Some packages have user-friendly interfaces with drop-down tips for beginners (ATS, UCLA Edu, 2014). Advances in technology have improved all our lives (Akindutire, 2013) and have allowed experts greater freedom to produce results in the twinkling of an eye, where analysis once took considerable time. This same technology has offered tremendous opportunities to research and kept research an interesting field of study. This study is significant because it attempts to measure the direct impact of different statistical software on research analysis.
7.1. People’s Participation in Observation, Analysis and Planning for Rural Development

Why should people participate? There is no controversy as to whether there should be people’s participation in development programmes designed to meet the basic needs of the people. The post-mortem of unsuccessful programmes has shown that they have invariably failed when they failed to give an adequate place and importance to people’s participation in planning and analysis. There is a body of philosophy and a set of assumptions behind people’s participation in rural development. These are:
i) It should be the people’s programme with the government’s participation;
ii) Communities need to build up their own capacity; agents have a catalytic role;
iii) Self-imposed changes have permanence compared with those imposed from outside;
iv) Holistic approaches are far better than fragmented approaches;
v) People need certain help to solve unique problems.
8.1. Introduction

Writing a reference list can sometimes be confusing because there are many different rules. Each university, or even each faculty, uses a different rule for writing reference lists. These differing rules are normal because they refer to different style guides, of which many are available. The MLA (Modern Language Association of America) style, the Chicago Manual of Style and the Oxford Guide to Style are some examples. Different style guides result in totally different ways of writing a reference list. The most widely accepted style guide in the social sciences is the APA (American Psychological Association) style. Some higher education journals (e.g. Journal of Studies in International Education, Higher Education) state clearly that authors must submit their papers according to APA style. Referencing in a general sense means giving credit to someone for using his or her ideas or thoughts in a research activity. Referencing helps to establish the originality of the ideas and thoughts used in the research activity. Failure to reference is treated as disrespect to the original author or writer and is seen as a major misconduct in academic research writing. Students often make the mistake of not providing proper references at the end of their research projects, essays or other pieces of work. This may lead to rejection of the written matter: if we do not reference properly, the written matter is treated as if it were copied from somewhere, and the research paper or material cannot be submitted for further research or reading purposes. We find a lot of data when we search for something.
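For illustration, a journal-article entry under APA style (7th edition) takes the general form below; the author, article title, journal name and DOI here are invented placeholders, not a real source:

    Sharma, R. (2020). Participatory approaches in rural research. Journal of
        Social Science Methods, 12(3), 45-60. https://doi.org/10.0000/example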
9.1. Introduction

The human population of the world will reach nearly 10 billion by 2050, boosting agricultural demand amid modest economic growth (FAO, 2017). At present, approximately 37.7% of the total land surface is being utilized for crop production. From employment generation to its contribution to national income, the agriculture sector plays a significant role in the economic prosperity of developed nations and an active part in the economies of developing countries as well. The growth of agriculture has resulted in a significant increase in the per-capita income of the rural community. Thus, a greater emphasis on the agricultural sector is the need of the hour. For countries like India, the agricultural sector accounts for 18% of GDP and provides employment to 50% of the country’s workforce. Development in the agricultural sector will enrich rural development, which will lead toward rural transformation and ultimately the desired structural transformation (Mogili and Deepak, 2018; Shah et al., 2019). The role of Artificial Intelligence (AI) in the agricultural sector has become evident recently. The sector needs the impetus of AI-based technology to maximize its yield and to address challenges including improper soil treatment, disease and pest infestation, big-data requirements, low output, and the knowledge gap between farmers and technology.
10.1. Introduction

Decision-making is a fundamental aspect of human life, influencing choices in both personal and professional settings. Every individual, whether consciously or unconsciously, engages in decision-making daily, ranging from simple tasks like selecting a meal to complex strategic planning in organizations. However, decision-making becomes increasingly challenging when multiple conflicting criteria are involved. The presence of numerous factors, each with varying degrees of importance, often creates ambiguity and uncertainty, making it difficult to arrive at the most suitable option. To address these challenges, structured decision-making methodologies have been developed to provide a systematic and logical framework for evaluating alternatives. One such widely recognized method is the Analytic Hierarchy Process (AHP), introduced by Thomas L. Saaty in the 1970s. AHP was designed to assist decision-makers in breaking down complex problems into a structured and systematic hierarchy. This approach allows for a step-by-step evaluation of different factors, making it easier to analyze their relative importance. By structuring a problem into different levels, such as goal, criteria, sub-criteria and alternatives, AHP provides clarity in decision-making and enhances consistency in judgments. The method is particularly beneficial when dealing with decisions that involve both tangible and intangible factors, as it allows subjective preferences to be quantified systematically (Saaty, 2008).
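A minimal sketch of the core AHP computation follows: criterion weights are taken from the principal eigenvector of a pairwise comparison matrix, and the consistency of the judgments is checked with Saaty's consistency ratio. The 3 x 3 matrix (three hypothetical criteria) is invented for illustration:

```python
# A minimal AHP sketch: priority weights from the principal eigenvector
# of a pairwise comparison matrix, plus Saaty's consistency ratio.
import numpy as np

# Pairwise comparisons on Saaty's 1-9 scale; A[i, j] = importance of i over j
A = np.array([
    [1.0, 3.0, 5.0],
    [1/3, 1.0, 2.0],
    [1/5, 1/2, 1.0],
])

eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)              # index of the principal eigenvalue
weights = np.abs(eigvecs[:, k].real)
weights /= weights.sum()                 # normalized priority weights

n = A.shape[0]
ci = (eigvals[k].real - n) / (n - 1)     # consistency index
ri = 0.58                                # Saaty's random index for n = 3
cr = ci / ri                             # consistency ratio; < 0.10 is acceptable
print("weights:", np.round(weights, 3), "CR:", round(cr, 3))
```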
11.1. Introduction

Fuzzy Cognitive Mapping (FCM) is a powerful soft computing technique that integrates elements of fuzzy logic and cognitive mapping to model complex systems. It allows for the representation of causal relationships between variables in an interpretable and flexible manner, making it particularly valuable for analyzing and simulating real-world dynamic systems. FCM has its roots in cognitive science, systems thinking, and artificial intelligence, enabling researchers and practitioners to model uncertainties and interdependencies effectively (Tyrovolas et al., 2023). The approach is particularly advantageous when dealing with complex, ill-structured problems where traditional mathematical modeling techniques fall short. By capturing expert knowledge and combining it with computational techniques, FCM helps in making informed predictions and policy decisions. It is widely applied across multiple domains, including environmental management for sustainability assessments (Özesmi and Özesmi, 2004), decision support systems in business and governance, healthcare for medical diagnostics and treatment planning (Apostolopoulos et al., 2024), and socio-economic planning for policy analysis and forecasting (Gray et al., 2015). Furthermore, its adaptability has led to its increasing integration with artificial intelligence, machine learning, and big data analytics, making it a promising approach for tackling contemporary challenges.
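As a minimal sketch of FCM inference (the three concepts and their causal weights are invented, and the update rule shown is one common variant with self-memory), concept activations are repeatedly propagated through a weighted causal matrix and squashed with a sigmoid until the map settles:

```python
# A minimal FCM simulation sketch: activations are updated through a
# causal weight matrix and a sigmoid threshold function.
import numpy as np

def sigmoid(x, lam=1.0):
    return 1.0 / (1.0 + np.exp(-lam * x))

# W[i, j] = causal influence of concept i on concept j, in [-1, 1]
W = np.array([
    [ 0.0, 0.6, -0.4],   # C1: e.g., "investment" (hypothetical)
    [ 0.0, 0.0,  0.7],   # C2: e.g., "production" (hypothetical)
    [-0.3, 0.0,  0.0],   # C3: e.g., "pollution"  (hypothetical)
])

state = np.array([0.8, 0.2, 0.1])        # initial activation of each concept
for step in range(20):                   # iterate until the map settles
    state = sigmoid(state @ W + state)   # common update rule with self-memory
print("steady-state activations:", np.round(state, 3))
```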
12.1. Introduction

Artificial Neural Networks (ANNs) are computational models inspired by the way biological neural networks in the human brain process information. They consist of interconnected nodes or “neurons” arranged in layers: an input layer, one or more hidden layers, and an output layer. Each neuron in the network processes data and passes the results to the next layer. ANNs are designed to recognize patterns, learn from data, and make predictions or classifications based on that learning. In an ANN, the connections between neurons are weighted, meaning each connection has a strength or value that determines the influence of one neuron on another. Through a process called training, the network adjusts these weights to minimize errors in its predictions. This is typically done using techniques like backpropagation, where the error is propagated backward through the network to update the weights (Abiodun et al., 2018). ANNs are highly versatile and have been applied across various domains, including image and speech recognition, natural language processing, and decision-making systems. Their ability to model complex relationships in large datasets makes them a crucial component in the field of machine learning and artificial intelligence (Ahmed et al., 2023).
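The sketch below shows these ideas end to end on the classic XOR problem: a forward pass through one hidden layer, followed by backpropagation of the error to update the weights. The layer sizes, learning rate and epoch count are illustrative choices, not prescriptions:

```python
# A minimal ANN sketch: one hidden layer trained by backpropagation on XOR.
import numpy as np

rng = np.random.default_rng(1)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)  # inputs
y = np.array([[0], [1], [1], [0]], dtype=float)              # XOR targets

W1 = rng.normal(size=(2, 4)); b1 = np.zeros(4)  # input -> hidden
W2 = rng.normal(size=(4, 1)); b2 = np.zeros(1)  # hidden -> output
sigmoid = lambda z: 1 / (1 + np.exp(-z))
lr = 0.5  # learning rate (illustrative)

for epoch in range(10000):
    # Forward pass: each layer processes the data and passes it on
    h = sigmoid(X @ W1 + b1)       # hidden activations
    out = sigmoid(h @ W2 + b2)     # network prediction

    # Backpropagation: propagate the error backward and update weights
    err_out = (out - y) * out * (1 - out)        # output-layer delta
    err_hid = (err_out @ W2.T) * h * (1 - h)     # hidden-layer delta
    W2 -= lr * h.T @ err_out; b2 -= lr * err_out.sum(axis=0)
    W1 -= lr * X.T @ err_hid; b1 -= lr * err_hid.sum(axis=0)

print("predictions:", out.round(2).ravel())  # should approach [0, 1, 1, 0]
```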
13.1. Introduction

The Delphi technique is a systematic, interactive forecasting method that involves gathering expert opinions to make informed decisions, predict future trends, or address complex problems. The core idea behind this technique is to use the knowledge and insights of experts in a structured manner to produce well-informed and reliable outcomes. By combining expert judgment with iterative feedback, the Delphi method allows for deeper analysis, greater accuracy, and a better understanding of complex issues that cannot be easily solved through traditional data analysis or quantitative methods (Ismail and Taliep, 2023). Originally designed for defence-related forecasting, the Delphi technique has since evolved into a versatile tool used across a wide range of fields, such as business, healthcare, education, public policy, environmental science, and social research. Its versatility stems from its ability to gather expert opinions in a manner that allows for rigorous analysis of complex issues, without the constraints of traditional group meetings or face-to-face discussions.
Project managers need to make sense of the large amounts of data they have, and analytical tools are used in project management to meet this need. Such tools are used to create forecasts of potential outcomes based on the variations present in environmental and project variables. Different types of analytical tools are used, and one of the most common is regression analysis.

14.1. Multiple Regression

Concept: Multiple regression is the most commonly utilized multivariate technique. It examines the relationship between a single metric dependent variable and two or more metric independent variables. Multiple regression is often used as a forecasting tool. The goal is to use a linear composite of two or more continuous and/or categorical variables (predictors) to: 1) predict scores on a single continuous variable (criterion), or 2) explain the nature of the single continuous criterion variable from what is known about the predictor variables. In prediction, the criterion is the main emphasis because decisions are made on its value; but often it is difficult to directly measure or obtain a subject’s actual score on the criterion, so it is important to estimate or predict the criterion score from the values of the predictor scores.
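As a minimal sketch of this prediction use (the small project-management dataset below, with team size and duration as predictors and cost as the criterion, is invented for illustration), ordinary least squares fits the linear composite and then predicts the criterion for a new case:

```python
# A minimal multiple regression sketch: one continuous criterion (cost)
# predicted from two continuous predictors by ordinary least squares.
import numpy as np

# Hypothetical data: team size, duration (months), and project cost
team   = np.array([3, 5, 4, 6, 8, 7, 5, 9], dtype=float)
months = np.array([6, 8, 7, 10, 12, 9, 7, 14], dtype=float)
cost   = np.array([20, 33, 27, 44, 57, 42, 31, 66], dtype=float)

X = np.column_stack([np.ones_like(team), team, months])  # add intercept column
beta, *_ = np.linalg.lstsq(X, cost, rcond=None)          # OLS estimates

intercept, b_team, b_months = beta
print(f"cost = {intercept:.2f} + {b_team:.2f}*team + {b_months:.2f}*months")

# Predict the criterion score for a new project (illustrative values)
new = np.array([1.0, 6.0, 9.0])  # intercept term, team = 6, months = 9
print("predicted cost:", round(float(new @ beta), 2))
```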
15.1. Introduction

Meta-analysis is a statistical technique used to combine the results of multiple studies to produce a single, more precise estimate of the effect size of a particular intervention or association. It is a powerful tool in research, particularly in fields such as medicine, psychology, education, and the social sciences, where it is often necessary to synthesize evidence from numerous studies to draw meaningful conclusions. This chapter provides a comprehensive overview of meta-analysis, including its history, methodology, advantages, limitations, and applications.

15.2. Historical Background

The concept of meta-analysis dates back to the early 20th century, but it was not until the 1970s that the term was coined by Gene V. Glass. Glass defined meta-analysis as “the statistical analysis of a large collection of analysis results from individual studies for the purpose of integrating the findings” (Glass, 1976). Since then, meta-analysis has evolved significantly, with advancements in statistical methods and software making it more accessible and robust.

15.3. The Need for Meta-Analysis

15.3.1. Accumulation of Evidence

In many fields, particularly in medicine and psychology, a large number of studies are conducted on the same or similar research questions. However, individual studies may have small sample sizes, leading to low statistical power and inconclusive results. Meta-analysis allows researchers to pool data from multiple studies, increasing the overall sample size and statistical power, thereby providing a more reliable estimate of the effect size.
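A minimal sketch of this pooling idea follows, using a fixed-effect model with inverse-variance weighting; the five study results are invented for illustration, and a random-effects model would be needed if the studies were heterogeneous:

```python
# A minimal fixed-effect meta-analysis sketch: each study's effect size is
# weighted by 1/variance, yielding a pooled estimate with a smaller SE.
import math

# (effect size, variance) pairs from five hypothetical studies
studies = [(0.30, 0.04), (0.45, 0.09), (0.25, 0.02), (0.50, 0.16), (0.35, 0.05)]

weights = [1.0 / v for _, v in studies]             # inverse-variance weights
pooled = sum(w * e for (e, _), w in zip(studies, weights)) / sum(weights)
se = math.sqrt(1.0 / sum(weights))                  # pooled standard error

lo, hi = pooled - 1.96 * se, pooled + 1.96 * se     # 95% confidence interval
print(f"pooled effect = {pooled:.3f}, 95% CI [{lo:.3f}, {hi:.3f}]")
```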
16.1. Introduction

Game theory is a mathematical framework designed for understanding and analyzing strategic interactions among rational decision-makers. It has applications in various fields such as economics, political science, psychology, biology, and computer science. The foundational premise of game theory is that the outcome of a decision made by one agent depends on the decisions made by others. This chapter provides a comprehensive overview of game theory, covering its fundamental concepts, types of games, solution concepts, and applications.

16.2. Fundamental Concepts

16.2.1. Players

In game theory, a player is any individual or entity that makes decisions within the context of the game. Players are assumed to be rational, meaning they aim to maximize their own utility or payoff based on their preferences and the information available to them.
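To make the rational-player assumption concrete, the sketch below enumerates the pure-strategy Nash equilibria of a 2 x 2 game by checking mutual best responses; the payoffs encode the classic Prisoner's Dilemma in utility form, chosen here only as a familiar example:

```python
# A minimal game-theory sketch: find the pure-strategy Nash equilibria of a
# 2 x 2 game by checking that neither rational player gains by deviating.
actions = ["Cooperate", "Defect"]

# payoff[i][j] = (row player's utility, column player's utility)
payoff = [
    [(3, 3), (0, 5)],   # row player cooperates
    [(5, 0), (1, 1)],   # row player defects
]

def is_nash(i, j):
    # Neither player can improve by unilaterally changing strategy
    row_ok = all(payoff[i][j][0] >= payoff[k][j][0] for k in range(2))
    col_ok = all(payoff[i][j][1] >= payoff[i][k][1] for k in range(2))
    return row_ok and col_ok

equilibria = [(actions[i], actions[j])
              for i in range(2) for j in range(2) if is_nash(i, j)]
print("Pure-strategy Nash equilibria:", equilibria)  # [('Defect', 'Defect')]
```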
17.1. Introduction

Grounded Theory (GT) is a systematic methodology in the social sciences that involves the construction of theories through methodical gathering and analysis of data. Developed by Barney Glaser and Anselm Strauss in 1967, GT is widely used in qualitative research to generate theories that are grounded in empirical data. Unlike traditional research methods that test hypotheses, GT allows theories to emerge from the data itself, making it a valuable tool for exploring complex social phenomena. This chapter provides an in-depth exploration of Grounded Theory, including its origins, key principles, methodologies, applications, and criticisms. By the end of this chapter, readers will have a thorough understanding of GT and its significance in qualitative research.
