Data collection forms a crucial part of academic inquiry, because the credibility, validity, and quality of research outcomes depend on the data collected. Our study used a systematic literature review to critically examine how diet-induced weight loss influences glucose control, measured by glycated hemoglobin (HbA1c) levels taken at six-month intervals, in patients with type 2 diabetes.
So far, we have identified 26 sources, comprising 23 articles published in scholarly journals and three articles authored by professional and intergovernmental organizations (the WHO, the AADE, and the American Diabetes Association). The latter category was used primarily to provide background for the study, such as the prevalence and demographic dynamics of type 2 diabetes. We opted not to use these sources to address the research questions directly, because their contents are not usually subject to rigorous peer review. The journal articles, by contrast, were peer reviewed and therefore more credible and reputable. The 23 journal articles were retrieved from academic databases. To locate them, we used various search terms drawn from the study variables, such as weight loss, glucose control, glycated hemoglobin (HbA1c), and type 2 diabetes. These terms were combined with operators and connecting words such as AND and VIA so as to generate the most relevant results. We limited our selection to journal articles published in the past five years, to ensure the data were up to date.
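The search strategy above can be sketched in code. This is a minimal illustrative sketch only: the `build_query` function and the `year>=` filter syntax are hypothetical stand-ins for whatever query language a particular academic database accepts, and the term list simply mirrors the study variables named in the text.

```python
from datetime import date

# Search terms drawn from the study variables described above.
SEARCH_TERMS = [
    "weight loss",
    "glucose control",
    "glycated hemoglobin",
    "HbA1c",
    "type 2 diabetes",
]

def build_query(terms, years_back=5, today=None):
    """Join quoted search terms with the AND operator and append a
    publication-year filter covering the past `years_back` years.
    The filter syntax here is illustrative, not a real database API."""
    today = today or date.today()
    earliest = today.year - years_back
    joined = " AND ".join(f'"{t}"' for t in terms)
    return f"{joined} AND year>={earliest}"

# Example: a query assembled as of 2016 would restrict results
# to publications from 2011 onward.
query = build_query(SEARCH_TERMS, today=date(2016, 1, 1))
```

In practice each database (PubMed, Scopus, and so on) has its own field tags and date-filter syntax, so a string like this would need to be adapted per database.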
On reflection, many people may view a systematic review as an easier data collection method. This, however, is mistaken. A scholar opting for a literature review bears the weighty obligation of setting clear inclusion and exclusion criteria to ensure valid, credible, and current data. In our case, our primary inclusion criteria were publication within the past five years and relevance to our research questions and target groups. Even so, we ended up with limited results. From this, we came to appreciate that data availability can be a serious challenge, and one may end up with no data at all. We agree with Moher et al.'s (2015) argument that the method is best employed where researchers seek to build upon existing hypotheses rather than to test new ones.
We wish to pose some questions on a few contentious issues. First, regarding the time limit as an inclusion criterion: what should be the test of recency for scholars seeking recent data on a health topic such as the effect of weight loss on HbA1c? And should exceeding this period be considered a fundamental methodological mistake? In our case we settled on a five-year limit, and many useful materials published in 2010 or 2011 ended up being excluded. There is also a question about sampling. In primary research, the sample size, N, is measured by the number of participants. Since articles here take the place of human participants, would it be conceptually sound to treat the articles as the sample, so that N is ascertained by counting the number of articles?
Moher, D., Shamseer, L., Clarke, M., Ghersi, D., Liberati, A., Petticrew, M., ... & Stewart, L. A. (2015). Preferred reporting items for systematic review and meta-analysis protocols (PRISMA-P) 2015 statement. Systematic Reviews, 4(1), 1.