Against this backdrop, I discuss how qualitative researchers have dealt with the question of induction, using a "generic analytic cycle" common to qualitative methods as an illustration. In the final sections, I propose reconsidering the role of theory in qualitative research and argue for the need to recover a substantial definition of theory in these studies. According to HUME, there are two primary ways to validate knowledge. Knowing facts is equivalent to identifying their causes and effects.

However, observing facts, describing them in their manifestation, does not amount to science. There must be a leap from the visible to the invisible, and herein lies induction: The inductive leap allows us, based on singular facts, to create statements about sets of facts and their future behavior.

But what sustains the argument about induction? What permits us to go from a singular fact to a statement about facts in general, or about future facts? According to HUME, induction has no logical basis.

The "statement about all" is not contained in the "statement about some. HUME claims that it is merely habit that causes us to think that if the sun rose today, it will do so once again tomorrow. There is therefore a psychological component in this knowledge-building process.

In other words, HUME demonstrated that passing from some to all is an emotionally and imaginatively based process, and that the root of all knowledge is sensory experience. Inductive thinking is problematic because we can never be certain that a recurring known event will continue to occur. The past may not be the best guarantee for current knowledge; otherwise, how could we explain unpredictable events? In the well-known analogy cited by POPPER, the fact that we observe innumerable white swans does not allow us to assume that there will never be a black one.
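The swan analogy can be phrased in code: no finite run of confirming observations establishes the universal claim, while a single counterexample refutes it. The following is only a toy illustration with invented data, not an implementation of any method discussed here:

```python
def all_white(swans):
    """Universal claim restricted to the sample: every observed swan is white."""
    return all(color == "white" for color in swans)

observed = ["white"] * 1000       # many confirming instances
assert all_white(observed)        # consistent with "all swans are white" ...

observed.append("black")          # ... until one black swan is observed
assert not all_white(observed)    # a single counterexample falsifies the claim
```

Note the asymmetry the example makes visible: `all_white` can be refuted by one element of the sample, but no sample size, however large, licenses the claim about swans not yet observed.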

Another relevant question is the distinction between empirical generalizations, based on the observation of a recurring number of singular cases, and universal generalizations in the form of laws. Without resorting to metaphysics, how do we attest to the truth of universal laws, which establish necessary, non-accidental connections between events, based on observations of singular cases alone (QUINE)? According to the skeptic HUME, all we can do is create hypotheses about how things should occur, drawing on our own empirical experiences or habits; we can never determine the ultimate fundamentals of phenomena.

HUME's position generated intense debate in the philosophy of science. The charge of irrationality rests on HUME's view that our beliefs carry more weight than rationality in making up our understanding. Naive inductivists, in contrast, argue that a large number of observations, obtained experimentally over a wide range of circumstances, allows inference from the empirical particular to the theoretical universal.

Knowledge, they assert, can be constructed on the basis of repeated observations, to the point where no observational statements conflict with the law or theory thereby derived, or up to an established saturation point. POPPER diverges from naive inductivism, proposing a redefinition of the role of theory in science. He contends that if there is no logical support for inferring a universal law from singular experience, there must be support for the opposite.

That is, we can legitimately claim that a theory is false on the basis of singular observational statements. Thus, the order is inverted: there is no observation without theory, since perception itself is influenced by expectations, previous experiences, and accumulated knowledge. At the same time, theoretical assertions without empirical content do not tell us much about the world. Theory must be corroborated or falsified by experience. From this emerges the well-known hypothetical-deductive method.

The empirical world is supposed to determine whether such a conclusion is true or mere speculation. For example, LAKATOS states that a theory consists of a complex of universal statements embedded in particular research programs, rather than a single statement, like a hypothesis, that can be tested straightforwardly.

This calls into question the value of the falsifiability of discrete hypotheses. Moreover, QUINE proposes that we conceive of theories holistically, as a web of interlocking statements, such that concepts can only be defined in terms of the other concepts that make up the network, which confer meaning on them and relate them to experience. From these criticisms it follows that the value of theories is not restricted to allowing the elaboration of hypotheses to be individually tested; theories are essential to explain the phenomena under investigation.

Thus, the primary focus of researchers should not be the data, but rather the phenomenon, which is embedded in a given theoretical web.

In the next section, I present a number of philosophical perspectives on the relationship between theory and empirical data, in order to widen the discussion of ways of addressing the problem of induction in science in general and in qualitative research in particular. One of the most prevalent ways of thinking about the theory-data relationship is that the latter verify the former.

This viewpoint is associated with the philosophy of logical positivism, which introduces a distinction between direct observation (which is not theory-laden) and theory, whose value depends on the justification afforded by empirical data. Thus, theoretical statements should have empirical content if they are to be trusted as claims about the world. The truth of a theoretical statement depends on a "correspondence theory" of truth. Positivists vehemently reject any pretense of metaphysical justification for scientific activity, arguing for the impossibility of synthetic a priori propositions, that is, non-contingent statements about the world.

Only analytic propositions (for example, logical and mathematical statements) can be true a priori, since they have no empirical content and therefore say nothing about what actually takes place in the world. In essence, the logical positivists were empiricists. However, one difference between them and the classical empiricists of the sixteenth to eighteenth centuries, including HUME, is that the positivists gave a linguistic and logical formulation to their theory of knowledge.

On this view, a sentence is meaningful when it can be verified by experience. In its strong version (SCHLICK), the criterion of verifiability assumes the existence of basic propositions capable of serving as the foundation of the process of empirical observation.

Thus, a statement is only significant when we can, at least in principle, verify it using basic propositions that indicate its meaning; for example, a statement which is caused, as immediately as possible, by perceptive experiences (AYER).

POPPER was a critic of logical positivism, and he introduced a second way of thinking about the relationship between theory and empirical data.

From the perspective of the previously mentioned hypothetical-deductive model, it falls to empirical data to falsify hypotheses developed a priori by researchers.

But what does it mean for a hypothesis to be falsifiable? It means that the hypothesis cannot, in principle, be established as true in and of itself; there must be possible observations capable of showing it to be false. A hypothesis results from an exercise of intellect, creative capacity, and consideration of context, since available knowledge offers us concepts, ideas, relationships, and so on.

Thus, in principle, as a product of human intellect, any hypothesis can be true, even one that apparently makes no sense. Ultimately, the data tell us whether our hypotheses hold.

If confirmed, they contribute to human progress; if falsified, they should be replaced by others. This shows that a theory must always be subject to revision, reconsideration, and improvement.

As mentioned in the previous section, the hypothetical-deductive model was not immune to criticism. In addition to the concerns already cited, another exists, related to the extent of falsification. Considering science from a historical and sociological perspective, several theories that initially seemed to have been falsified, and thus should have been discarded, later proved to be true.

Furthermore, when a hypothesis is falsified, it does not necessarily mean that the entire theory from which it was deduced should be discarded. This seems to show there is something more involved in the relationship between theory and empirical data. For realists, for example, this "something more" is the structure of the world itself (WORRALL), which the theory represents, if the latter is to be true.

A third way of portraying the theory-data relationship was proposed by HEMPEL, who developed the deductive-nomological model of scientific explanation, by which a statement describing a phenomenon can be logically deduced from laws together with background conditions.

When associated with statistical models, for example those based on frequency distributions, theories identify or represent repetitions and patterns in a particular class of events. They seek order in the world. The three ways of thinking about the relationship between theory and empirical data presented above illustrate a central question in the philosophy of science. Today, a sound perspective on this issue can be found in the work of authors linked to scientific realism and antirealism.

From a realist perspective, theories must be interpreted literally: there is a reality independent of us, and for theories to be scientific, they must tell us the true nature of this reality. This poses several problems for realists.

One, which is of interest here, is how to explain the existence of two or more empirically successful theories of the same phenomenon. This indicates that there is no way to guarantee an essential, definitive connection between a theory and any particular facts and properties of the world. The same phenomenon can be legitimately explained in different ways, using distinct theories and theoretical models.

In this sense, the choice of a theory may have nothing to do with the truth or the theory's approximation to the essential facts, but rather with its capacity to help us solve problems of practical interest.

Therefore, the aim of a theory would not be to be "pegged" to the world, but to help us represent the world in aspects relevant to a proposed transformation of part of it. According to this pragmatic, or antirealist, perspective, phenomena are not discovered by science but constructed by it. This argument depends on the premise that we can never come to know the true nature of the world, due to the existence of unobservable entities.

Phenomena themselves can be examples of the unobservable, since their postulation depends on their incorporation into a theoretical web. This reorders the relationship among a number of key concepts. In summary, theories are devices that systematize or organize experience.

They are not only instruments for deducing hypotheses and predictions, but also resources of semiotic mediation; they do not merely reflect the world in the mind's eye (RORTY), but (re)construct it according to our pragmatic interests. However, a strong empiricist culture likely persists in our research activities, sustaining a certain "theoretical allergy" and conceptualizing theory and theories in an excessively restrictive sense.

Does this also apply to qualitative research? To answer this question, I will now discuss the problem of induction and the role of theory in qualitative research. The field of qualitative methods has grown significantly in recent decades, judging from the profusion of journal papers and textbooks on the subject.

As a result of this growth, we have today a complex, diversified field influenced by a large number of schools, authors, and epistemological perspectives. It therefore seems risky to make general assertions about qualitative methods, which are best spoken of in the plural.

Nevertheless, I will attempt to do so in this section. Specifically, I will illustrate what seems to me to be the analytic core of many qualitative data analysis methods. I argue that this analytic cycle exposes the tensions inherent in the process of developing inductive theory from empirical data.

In operational terms, I will refer to the coding and categorizing process in qualitative research as the "generic analytic cycle." I hold that this allows me to broadly discuss the problem of induction and the role of theory in the qualitative research process, which would be technically more difficult if I had to consider the characteristic analysis cycle of each qualitative research tradition separately.

Next, I will comment on the three large processes of a generic analytic cycle. The process of analyzing qualitative data begins with researchers establishing initial contact with the material in their data set by means of a general reading, followed by a careful reading and "thick description" (GEERTZ) of each piece of information: an interview, an image, excerpts from documents. As a result of this procedure, it is expected that certain themes and patterns will start to emerge from the data; that is, that they will inductively reveal themselves to the researchers as the data interact with the tools described above.

An alternative way of discovering themes is to analyze data according to an existing framework, that is, deductively. Thus, when creating codebooks for qualitative analyses (in content analysis, for example), researchers can be inductive (allowing themes, patterns, and categories to emerge from the data), deductive (relying on previous analytical categories obtained from a theory of reference, or even from an interview guide), or both at the same time (especially in mixed research designs; CRESWELL). The coding procedure develops as researchers identify themes and patterns in their data.

The coding procedure is complemented by categorization and conceptualization. At this point, the purpose of analysis is to reduce the material further while raising its level of abstraction. Classifying or clustering themes or codes into categories allows researchers to organize them and develop conceptualizations about them, that is, to explain them.

To achieve this, researchers can contextualize their findings (thick description), situating them within a wider picture in which they make sense; compare them to theories and other findings discussed in the relevant literature; compare subgroups, observing whether explanations differ depending on the individuals involved; link and relate categories to one another (in general, grouping them according to similar characteristics); and use typologies, conceptual models, and data matrices.

Researchers can also try to explain outliers, that is, units of empirical material that do not fit the theory under construction. A fundamental question related to the second large procedure described above, one with a direct impact on the relation between theory and empirical data, is what researchers understand by "theme," "pattern," and "category."

In summary, themes can assume both categorical (an instance of the experience, a unit of meaning) and frequential (repetition of themes, or their location in networks or schemes) forms.

Identifying themes is the first transposition from the empirical to the theoretical, an initial inductive leap. This does not occur abruptly, but rather as a process of growing abstraction. Indeed, at the outset of analysis, themes can simply be codes: labels assigned to certain portions of empirical material, for example, to particular parts of an interview, or even to a single sentence, word, or image. Progressively, codes merge with others, rearrange themselves, and come to reflect a more abstract concept or topic, reducing the dispersion of the raw data.
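The two-step movement described above, from coded segments to more abstract categories, can be sketched programmatically. The snippet below is a minimal illustration of the idea, not a real qualitative-analysis tool: the codebook, the keyword rules, and the interview excerpts are all hypothetical.

```python
from collections import defaultdict

# Hypothetical codebook: each code (label) is clustered under a more
# abstract category, the second step of the generic analytic cycle.
CODEBOOK = {
    "fear of relapse": "uncertainty",
    "distrust of prognosis": "uncertainty",
    "support from family": "coping resources",
    "faith practices": "coping resources",
}

def code_segments(segments, keywords_to_codes):
    """First pass: assign codes (labels) to raw data segments."""
    coded = []
    for seg in segments:
        for keyword, code in keywords_to_codes.items():
            if keyword in seg.lower():
                coded.append((seg, code))
    return coded

def categorize(coded_segments, codebook):
    """Second pass: cluster the assigned codes into higher-level categories."""
    categories = defaultdict(list)
    for seg, code in coded_segments:
        categories[codebook[code]].append(code)
    return dict(categories)

# Hypothetical interview excerpts.
segments = [
    "I am always afraid the illness will come back.",
    "My family kept me going through treatment.",
]
keywords = {"afraid": "fear of relapse", "family": "support from family"}

coded = code_segments(segments, keywords)
print(categorize(coded, CODEBOOK))
# {'uncertainty': ['fear of relapse'], 'coping resources': ['support from family']}
```

The sketch also makes the article's tension concrete: whether `CODEBOOK` and `keywords` emerge from the data (inductively) or are fixed in advance from a theory of reference (deductively) is exactly the choice discussed above, and nothing in the mechanics of coding itself decides it.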

Regardless of the strategy used, the last procedure of qualitative analysis should allow researchers to develop a theory that is not a simple synthesis of observational statements, that is, a description in the broad sense.

Researchers must go beyond induction, and it is at this point that problems emerge in reconciling empiricism with the criteria demanded of a formal scientific explanation. How have qualitative researchers dealt with this problem? The theory-building process is conducted against a growing backdrop of observational data. Initially, via induction, researchers start from observational data, acquired through either experimental or natural designs, and make inferences from these data by a process of enumerative induction.

In this way, theories, or general-universal statements, are proposed. In qualitative research, as I pointed out at the beginning of this article, this is very well illustrated by grounded theory methodology (GTM), which proposes an analytic spiral stemming from the data and progressing toward explanation, combining two large vectors. It is, therefore, a two-handed process from description to explanation, always comparing cases and organizing them into increasingly central and abstract thematic categories.

Without this interplay, it would seem difficult to justify the scientific relevance of the qualitative procedure, which would be no more than another way of cataloging and describing empirical facts, without any connection to broader phenomena and theories. However, perhaps not even the interplay between small- or mid-range theories, generated inductively from a set of available empirical data, and large-range deductive ones can rid qualitative methods of the induction problems discussed in this article.

In the first place, as I have already mentioned, nothing guarantees that discrete empirical data, even when collected in large amounts and under widely varying conditions, can support large-range theories on their own. They may sustain parts of these theories, hypotheses, and questions, but not the theories as a whole, whose development depends on other factors. Thus, on what basis can it be said that categorizing data from interviews with a given set of individuals allows researchers to make non-observational (therefore theoretical) statements about a phenomenon that is, say, psychological?

Qualitative researchers can counter by stating that the purpose of their work is not to produce generalizations in the form of law-like statements, but rather to understand the phenomenon. However, in doing so, the research in question runs the risk of being purely descriptive, its explanation just an abbreviation for situated empirical observations (ROSENBERG). This is not about the number of subjects, which is a sampling problem; it refers to the degree to which empirical data, irrespective of their amount, can support non-observational, theoretical statements.

In the second place, when a theory is inductively constructed, one can assume that empirical data are able, in and of themselves, to frame or postulate the phenomenon investigated. As a consequence, the theory-building process can advance "in the dark," since the phenomenon takes shape as the empirical data accumulate.

Moreover, my perception of reality depends on my previous experience and, above all, my prior knowledge. Therefore, the choice of which facets, properties, or qualities of a phenomenon will be considered depends on its integration into a theoretical web, in the holistic sense advocated by LAKATOS and especially by QUINE. Considering this point, the use of generic analysis methods risks becoming an ad hoc resource.

This last proposition is certainly not alien to qualitative researchers. However, I believe there is a need to re-emphasize this point, which seems to be critical if we are to address the problem of induction in qualitative research.

This assertion will be discussed in the following section. I propose three brief suggestions for addressing the problems outlined in the preceding section: how research using what we have called "generic" methods in this paper can deal with the problem of induction and theory building. The first suggestion, already alluded to in previous sections, is that qualitative researchers rehabilitate concepts that depend more substantially on a theoretical web, in the sense used by QUINE. This is based on the assumption that concepts acquire meaning in the theoretical context to which they belong.

However, rehabilitating concepts involves reflecting more vigorously on the meaning of the theory used over the entire course of the research, not only when analyzing empirical data. Obviously, this point is not unfamiliar to qualitative researchers. Nevertheless, I believe the debate is far from over. If we look at recent textbooks covering qualitative research, we notice that the authors' focus still seems to fall on the distinction between, and combination of, induction and deduction in the coding and classification process.

It seems less common to find a metatheoretical reflection that questions this traditional conception of the knowledge-producing cycle, or that attempts to bring the qualitative literature into current debates in the philosophy of science. For example, in a historical study aimed at clarifying the concept of theoretical sensitivity and its role in the categorization and theory-building process, GLASER proposes a distinction between two types of codes. In doing so, he seems to endorse the distinction between observational statements, on the one hand, and theoretical ones on the other.

I believe the same problem occurs in other generic qualitative methods. One reason for this may be the implicit concept of theory these methods hold. In this case, theory is thought of as the conceptual component that links empirically grounded thematic categories; its role seems to be to sustain bonds, or mediate, between empirical categories and wider theoretical concepts. In other cases, qualitative researchers seem to understand theory in a way paradoxically similar to that of the logical positivists. Depending on its objectives with respect to empirical verification, qualitative research can be confirmatory or exploratory (GUEST et al.).

Thus, qualitative research may aim to refine existing theories; confirm or falsify hypotheses derived from current theories; develop new inductive theories; present counterfactual inferences (that is, cases that do not confirm a current theory); and even make inferences in the sense of prospective causal explanations.

The second suggestion has to do with the insistence that qualitative researchers, especially novices, situate their research within wider theoretical traditions (or theoretical webs), avoiding, as much as possible, general and standard methods as well as a "technicist" approach to research.

To that end, they must have at least minimal knowledge of the basic theoretical assumptions involved. Some common theoretical traditions in the qualitative research literature are the phenomenological, hermeneutical (including narrative research), discursive, ethnographic, and grounded theory traditions.

My third suggestion is that qualitative researchers rethink the role of "emergence," or unexpected facts, in qualitative research, as well as the relationship of these facts to the theorizing process. Throughout this article, I have insisted that the investigation of a scientific phenomenon depends on its incorporation into a particular theoretical web.

Moreover, this web is not merely a set of hypotheses from which predictions can be made. If it were, I would simply be recapitulating the hypothetical-deductive approach in the domain of qualitative methods, saying that theory comes "before" data. Instead, I suggest, based on SCHEIBE, that the dynamic between theory and empirical data involves a process of reconstruction, and that the theoretical web is actually a background that guides us, sometimes tacitly (POLANYI), in relation to a phenomenon, its relevant dimensions, and the ways to best access it.

The "meeting" between theory and phenomenon can often occur in a casual, unpredictable, and unexpected manner, although always within a scientific and theoretical context.

In this sense, qualitative researchers have sought to account for situations in which the theory-building process results from unexpected events or phenomena. In other words, my comprehensive systems are unable to capture reality in all its complexity. At the same time, this may mean that there is "something more" beyond my symbolic systems, causing them to be continuously subject to revision.

This is the realist position in a broad sense. Currently, a specific version of this position, called critical realism, advocates the existence of an objective reality formed by events and their underlying causes, about the latter of which one can never acquire definitive knowledge. In qualitative research, we observe recent efforts to move closer to this form of realism. This perspective seeks to position itself in a field contested by forces such as empiricism, materialism, idealism, relativism, constructionism, and the like.

Given how recently qualitative researchers have embraced critical realism, it is still difficult to predict its impact on the theory-building process, although incorporating new philosophical perspectives in order to evaluate the field's own practices is apparently a positive development.

The purpose of this article was to reflect on the ramifications that the problem of induction poses for qualitative research.