The IGNOU MPC-005 Solved Question Paper PDF Download page is designed to help students access high-quality exam resources in one place. Here you can find IGNOU previous year solved question papers in PDF format, covering all important questions with detailed answers. The page also collects all IGNOU previous year question papers into a single PDF, making it easier for students to prepare effectively.
- IGNOU MPC-005 Solved Question Paper in Hindi
- IGNOU MPC-005 Solved Question Paper in English
- IGNOU Previous Year Solved Question Papers (All Courses)
Whether you are looking for IGNOU previous year solved question papers in English or in Hindi, this page offers both options to suit your learning needs. These solved papers help you understand exam patterns, improve answer-writing skills, and boost confidence for upcoming exams.
IGNOU MPC-005 Solved Question Paper PDF

This section provides the IGNOU MPC-005 Solved Question Paper PDF in both Hindi and English. These solved previous year question papers include detailed answers to help you understand exam patterns and improve your preparation. You can also access all IGNOU previous year question papers in one PDF for quick and effective revision before exams.
IGNOU MPC-005 Previous Year Solved Question Paper in Hindi
Q1. विश्वसनीयता को परिभाषित कीजिए। विश्वसनीयता का अनुमान लगाने की विभिन्न विधियों का वर्णन कीजिए। 3+7
Ans.
परिभाषा:
मनोवैज्ञानिक अनुसंधान और परीक्षण में, विश्वसनीयता एक माप की स्थिरता या निरंतरता को संदर्भित करती है। यदि एक परीक्षण या माप उपकरण समान परिस्थितियों में बार-बार उपयोग किए जाने पर समान परिणाम देता है, तो उसे विश्वसनीय माना जाता है। यह इस बात की डिग्री है कि एक माप यादृच्छिक त्रुटि (random error) से कितना मुक्त है। विश्वसनीयता वैधता (validity) के लिए एक आवश्यक शर्त है, लेकिन यह इसकी गारंटी नहीं देती है। एक माप विश्वसनीय हो सकता है लेकिन मान्य नहीं हो सकता है, लेकिन यदि कोई माप विश्वसनीय नहीं है, तो वह मान्य भी नहीं हो सकता है।
विश्वसनीयता का अनुमान लगाने की विधियाँ:
विश्वसनीयता का अनुमान लगाने के लिए कई सांख्यिकीय विधियाँ हैं, जिनमें से प्रमुख निम्नलिखित हैं:
- परीक्षण-पुनर्परीक्षण विश्वसनीयता (Test-Retest Reliability): यह समय के साथ एक माप की स्थिरता का आकलन करता है। इस विधि में, एक ही परीक्षण को एक ही समूह के प्रतिभागियों पर दो अलग-अलग समय बिंदुओं पर प्रशासित किया जाता है। फिर दो सेट के स्कोर के बीच सहसंबंध गुणांक की गणना की जाती है। यदि सहसंबंध उच्च (आमतौर पर +0.80 या अधिक) है, तो परीक्षण को विश्वसनीय माना जाता है। समय का अंतराल बहुत छोटा या बहुत लंबा नहीं होना चाहिए, क्योंकि बहुत छोटा अंतराल स्मृति प्रभाव (memory effects) और बहुत लंबा अंतराल वास्तविक परिवर्तन (genuine change) को जन्म दे सकता है।
- समानांतर-प्रपत्र विश्वसनीयता (Parallel-Forms Reliability): इस विधि में, एक ही सामग्री, कठिनाई स्तर और प्रारूप के दो समकक्ष (equivalent) परीक्षण प्रपत्र विकसित किए जाते हैं। दोनों प्रपत्रों को एक ही समूह को एक के बाद एक प्रशासित किया जाता है। फिर दो प्रपत्रों से प्राप्त अंकों के बीच सहसंबंध की गणना की जाती है। एक उच्च सहसंबंध गुणांक इंगित करता है कि प्रपत्र विश्वसनीय हैं। इस विधि का लाभ यह है कि यह परीक्षण-पुनर्परीक्षण में होने वाले स्मृति प्रभाव को कम करता है, लेकिन दो वास्तव में समानांतर प्रपत्रों का निर्माण करना मुश्किल और समय लेने वाला होता है।
- विभाजित-अर्ध विश्वसनीयता (Split-Half Reliability): यह एक परीक्षण की आंतरिक स्थिरता का आकलन करता है। इस विधि में, एक परीक्षण को एक ही बार प्रशासित किया जाता है, और फिर परीक्षण के आइटम्स को दो हिस्सों में विभाजित किया जाता है (जैसे, सम-विषम आइटम्स)। प्रत्येक आधे के लिए कुल स्कोर की गणना की जाती है, और इन दो स्कोरों के बीच सहसंबंध स्थापित किया जाता है। चूँकि यह सहसंबंध केवल आधे परीक्षण के लिए होता है, इसलिए पूरे परीक्षण की विश्वसनीयता का अनुमान लगाने के लिए स्पीयरमैन-ब्राउन प्रोफेसी फॉर्मूला (Spearman-Brown prophecy formula) का उपयोग करके इसे समायोजित किया जाता है।
- आंतरिक संगति विश्वसनीयता (Internal Consistency Reliability): यह विधि भी माप की आंतरिक स्थिरता का मूल्यांकन करती है, अर्थात यह देखती है कि एक परीक्षण के सभी आइटम्स एक ही अवधारणा (construct) को कितनी अच्छी तरह से मापते हैं। इसके दो सामान्य माप हैं:
- क्रोनबैक का अल्फा (Cronbach’s Alpha): इसका उपयोग तब किया जाता है जब आइटम्स के कई संभावित उत्तर होते हैं (जैसे, लिकर्ट स्केल)। यह अनिवार्य रूप से परीक्षण को विभाजित करने के सभी संभावित तरीकों के लिए औसत सहसंबंध की गणना करता है।
- कुडर-रिचर्डसन फॉर्मूला (Kuder-Richardson Formula – KR-20): इसका उपयोग तब किया जाता है जब आइटम्स के उत्तर द्विभाजी (dichotomous) होते हैं (जैसे, सही/गलत, हाँ/नहीं)।
ये विधियाँ शोधकर्ताओं को यह सुनिश्चित करने में मदद करती हैं कि उनके द्वारा उपयोग किए जा रहे उपकरण सुसंगत और भरोसेमंद परिणाम दे रहे हैं।
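ऊपर वर्णित विधियों में से दो — परीक्षण-पुनर्परीक्षण सहसंबंध और स्पीयरमैन-ब्राउन समायोजन — को नीचे एक छोटे Python स्केच में दिखाया गया है। ध्यान दें कि सभी स्कोर काल्पनिक (उदाहरण के लिए बनाए गए) हैं:

```python
# परीक्षण-पुनर्परीक्षण विश्वसनीयता और स्पीयरमैन-ब्राउन समायोजन का स्केच
# (शुद्ध Python, कोई बाहरी लाइब्रेरी नहीं; डेटा काल्पनिक है)।

def pearson_r(x, y):
    """दो स्कोर-सूचियों के बीच पियर्सन सहसंबंध गुणांक।"""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

def spearman_brown(r_half):
    """आधे परीक्षण के सहसंबंध से पूर्ण परीक्षण की विश्वसनीयता का अनुमान।"""
    return 2 * r_half / (1 + r_half)

# काल्पनिक डेटा: 5 प्रतिभागियों के समय-1 और समय-2 स्कोर
time1 = [12, 15, 11, 18, 14]
time2 = [13, 14, 12, 17, 15]
print(round(pearson_r(time1, time2), 2))   # परीक्षण-पुनर्परीक्षण r ≈ 0.95
print(round(spearman_brown(0.70), 2))      # split-half r = 0.70 → पूर्ण परीक्षण ≈ 0.82
```

स्पीयरमैन-ब्राउन सूत्र (r_full = 2r / (1 + r)) यही दर्शाता है कि आधे परीक्षण का 0.70 सहसंबंध पूर्ण-लंबाई परीक्षण के लिए लगभग 0.82 की विश्वसनीयता का अनुमान देता है।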
Q2. अच्छे शोध डिजाइन के मानदंडों की व्याख्या कीजिए और शोध डिजाइन के प्रकारों पर चर्चा कीजिए। 3+7
Ans.
एक अच्छे शोध डिजाइन के मानदंड:
एक शोध डिजाइन एक अध्ययन के लिए एक तार्किक संरचना या योजना है। एक अच्छा शोध डिजाइन यह सुनिश्चित करता है कि शोध प्रश्न का उत्तर निष्पक्ष, विश्वसनीय और वैध तरीके से दिया जा सके। इसके मुख्य मानदंड निम्नलिखित हैं:
- वस्तुनिष्ठता (Objectivity): अनुसंधान प्रक्रिया और परिणाम व्यक्तिगत पूर्वाग्रहों, भावनाओं या मतों से मुक्त होने चाहिए। प्रक्रियाओं को मानकीकृत किया जाना चाहिए ताकि अन्य शोधकर्ता उन्हें दोहरा सकें।
- विश्वसनीयता (Reliability): जैसा कि पहले बताया गया है, एक शोध डिजाइन को ऐसे परिणाम देने में सक्षम होना चाहिए जो समय के साथ और विभिन्न परिस्थितियों में सुसंगत हों। यदि अध्ययन को दोहराया जाता है, तो परिणाम समान होने चाहिए।
- वैधता (Validity): यह सबसे महत्वपूर्ण मानदंड है। वैधता इस बात को संदर्भित करती है कि एक शोध अध्ययन वास्तव में वही मापता है जो वह मापने का दावा करता है।
- आंतरिक वैधता (Internal Validity): यह इस बात की डिग्री है कि अध्ययन के परिणाम स्वतंत्र चर (independent variable) में हेरफेर के कारण हैं, न कि किसी अन्य बाहरी कारक के कारण।
- बाहरी वैधता (External Validity): यह उस डिग्री को संदर्भित करता है जिस तक अध्ययन के परिणामों को अन्य सेटिंग्स, अन्य लोगों और समय के साथ सामान्यीकृत (generalized) किया जा सकता है।
- सामान्यीकरण (Generalizability): यह बाहरी वैधता से निकटता से संबंधित है। एक अच्छा शोध डिजाइन ऐसे परिणाम उत्पन्न करता है जो अध्ययन में शामिल नमूने से परे एक बड़ी आबादी पर लागू हो सकते हैं।
- नियंत्रण (Control): एक अच्छे डिजाइन में, शोधकर्ता बाहरी या अप्रासंगिक चरों (extraneous variables) के प्रभाव को नियंत्रित करने या समाप्त करने का प्रयास करता है जो परिणामों को प्रभावित कर सकते हैं।
शोध डिजाइन के प्रकार:
शोध डिजाइनों को मोटे तौर पर तीन मुख्य श्रेणियों में वर्गीकृत किया जा सकता है:
- अन्वेषणात्मक अनुसंधान डिजाइन (Exploratory Research Design): इस डिजाइन का उपयोग तब किया जाता है जब कोई समस्या स्पष्ट रूप से परिभाषित नहीं होती है। इसका उद्देश्य समस्या को बेहतर ढंग से समझने, अंतर्दृष्टि विकसित करने और भविष्य के शोध के लिए परिकल्पना तैयार करने के लिए प्रारंभिक शोध करना है। इसमें साहित्य समीक्षा, विशेषज्ञ सर्वेक्षण और केस स्टडी जैसी तकनीकें शामिल हैं।
- वर्णनात्मक अनुसंधान डिजाइन (Descriptive Research Design): इसका उद्देश्य किसी जनसंख्या, स्थिति या घटना की विशेषताओं का सटीक और व्यवस्थित रूप से वर्णन करना है। यह “क्या, कहाँ, कब और कैसे” प्रश्नों का उत्तर देता है, लेकिन “क्यों” का नहीं। इस श्रेणी में शामिल हैं:
- सर्वेक्षण अनुसंधान (Survey Research): प्रश्नावली या साक्षात्कार के माध्यम से एक नमूने से डेटा एकत्र करना।
- केस स्टडी (Case Study): किसी एक व्यक्ति, समूह या घटना का गहन अध्ययन।
- अवलोकन अनुसंधान (Observational Research): प्राकृतिक या नियंत्रित सेटिंग में व्यवहार का अवलोकन करना।
- कारणात्मक (प्रायोगिक) अनुसंधान डिजाइन (Causal (Experimental) Research Design): इस डिजाइन का उद्देश्य कारण-और-प्रभाव संबंधों का निर्धारण करना है। शोधकर्ता एक या अधिक स्वतंत्र चरों में हेरफेर करता है और आश्रित चर पर उनके प्रभाव का निरीक्षण करता है, जबकि अन्य सभी चरों को नियंत्रित रखता है।
- सच्चा प्रायोगिक डिजाइन (True Experimental Design): इसमें चरों का हेरफेर, नियंत्रण समूह और प्रतिभागियों का यादृच्छिक असाइनमेंट शामिल है। यह उच्चतम आंतरिक वैधता प्रदान करता है।
- अर्ध-प्रायोगिक डिजाइन (Quasi-Experimental Design): यह सच्चे प्रयोगों के समान है लेकिन इसमें यादृच्छिक असाइनमेंट की कमी होती है। इसका उपयोग उन स्थितियों में किया जाता है जहां यादृच्छिककरण संभव या नैतिक नहीं है।
- पूर्व-प्रायोगिक डिजाइन (Pre-Experimental Design): इनमें नियंत्रण समूह या पूर्व-परीक्षण की कमी होती है और ये आंतरिक वैधता के लिए कई खतरों के प्रति संवेदनशील होते हैं।
शोध डिजाइन का चुनाव शोध प्रश्न, उपलब्ध संसाधनों और अध्ययन के उद्देश्यों पर निर्भर करता है।
Q3. एक्स-पोस्ट फैक्टो शोध और प्रायोगिक शोध के बीच अंतर स्पष्ट कीजिए। एक्स-पोस्ट फैक्टो शोध के चरणों का वर्णन कीजिए। 3+7
Ans.
एक्स-पोस्ट फैक्टो शोध और प्रायोगिक शोध के बीच अंतर:
एक्स-पोस्ट फैक्टो शोध और प्रायोगिक शोध दोनों ही चरों के बीच संबंधों का अध्ययन करते हैं, लेकिन उनकी कार्यप्रणाली में एक मौलिक अंतर है।
मुख्य अंतर स्वतंत्र चर (Independent Variable – IV) के हेरफेर में निहित है।
- प्रायोगिक शोध (Experimental Research): इसमें, शोधकर्ता सक्रिय रूप से एक या एक से अधिक स्वतंत्र चरों में हेरफेर (manipulation) करता है ताकि आश्रित चर (Dependent Variable – DV) पर उनके प्रभाव का निरीक्षण किया जा सके। उदाहरण के लिए, एक नई शिक्षण विधि (IV) के प्रभाव का अकादमिक प्रदर्शन (DV) पर अध्ययन करने के लिए, शोधकर्ता एक समूह को नई विधि से पढ़ाएगा और दूसरे को पारंपरिक विधि से। इसमें प्रतिभागियों का यादृच्छिक असाइनमेंट (random assignment) भी होता है, जो समूहों की प्रारंभिक समानता सुनिश्चित करता है और आंतरिक वैधता को बढ़ाता है।
- एक्स-पोस्ट फैक्टो शोध (Ex-Post Facto Research): इस प्रकार के शोध में, शोधकर्ता स्वतंत्र चर में हेरफेर नहीं करता है क्योंकि घटना “तथ्य के बाद” (after the fact) पहले ही घटित हो चुकी होती है। शोधकर्ता उन विषयों का चयन करता है जिनके लिए स्वतंत्र चर के विभिन्न स्तर पहले से मौजूद हैं। उदाहरण के लिए, धूम्रपान (IV) के फेफड़ों के स्वास्थ्य (DV) पर प्रभाव का अध्ययन करने के लिए, शोधकर्ता धूम्रपान करने वालों और धूम्रपान न करने वालों के दो समूहों का चयन करेगा और उनके फेफड़ों के स्वास्थ्य की तुलना करेगा। यहाँ धूम्रपान करने या न करने का निर्णय शोधकर्ता द्वारा नहीं, बल्कि प्रतिभागियों द्वारा पहले ही लिया जा चुका है। इसमें यादृच्छिक असाइनमेंट संभव नहीं है, जिससे आंतरिक वैधता कम हो जाती है क्योंकि अन्य कारक (जैसे जीवनशैली, आनुवंशिकी) भी परिणामों को प्रभावित कर सकते हैं।
संक्षेप में, प्रायोगिक अनुसंधान कारण-और-प्रभाव का निष्कर्ष निकालने में अधिक मजबूत है क्योंकि इसमें हेरफेर और नियंत्रण होता है, जबकि एक्स-पोस्ट फैक्टो शोध केवल चरों के बीच संबंध या जुड़ाव का सुझाव दे सकता है।
एक्स-पोस्ट फैक्टो शोध के चरण:
एक्स-पोस्ट फैक्टो शोध करने में निम्नलिखित चरण शामिल हैं:
- समस्या को परिभाषित करना: अनुसंधान समस्या की स्पष्ट रूप से पहचान करना। शोधकर्ता उन घटनाओं या स्थितियों की पहचान करता है जिनका वह अध्ययन करना चाहता है और संभावित कारण-और-प्रभाव संबंध की परिकल्पना करता है। (जैसे, क्या प्रारंभिक बचपन की शिक्षा का बाद की सामाजिक सफलता पर प्रभाव पड़ता है?)
- परिकल्पना निर्माण: स्वतंत्र और आश्रित चरों के बीच अपेक्षित संबंधों के बारे में एक स्पष्ट और परीक्षण योग्य परिकल्पना तैयार करना।
- समूहों का चयन: प्रतिभागियों के दो या दो से अधिक समूहों का चयन करना जो स्वतंत्र चर पर भिन्न होते हैं। एक समूह में वह विशेषता या अनुभव होता है जिसका अध्ययन किया जा रहा है (प्रायोगिक समूह), जबकि दूसरे समूह में नहीं होता है (नियंत्रण समूह)। चयन सावधानी से किया जाना चाहिए ताकि समूह अन्य प्रासंगिक चरों पर यथासंभव समान हों।
- डेटा संग्रह: चयनित समूहों से आश्रित चर पर डेटा एकत्र करना। यह सर्वेक्षण, परीक्षण, साक्षात्कार या मौजूदा रिकॉर्ड के माध्यम से किया जा सकता है। शोधकर्ता अप्रासंगिक चरों (extraneous variables) पर भी डेटा एकत्र कर सकता है जिन्हें सांख्यिकीय रूप से नियंत्रित करने की आवश्यकता हो सकती है।
- डेटा का विश्लेषण: समूहों के बीच आश्रित चर पर औसत स्कोर में अंतर का परीक्षण करने के लिए उपयुक्त सांख्यिकीय तकनीकों (जैसे टी-टेस्ट, एनोवा) का उपयोग करना।
- परिणामों की व्याख्या: परिणामों की सावधानीपूर्वक व्याख्या करना। चूँकि इसमें हेरफेर और यादृच्छिककरण की कमी होती है, इसलिए कारण-और-प्रभाव के बारे में सीधे निष्कर्ष निकालना संभव नहीं है। शोधकर्ता को वैकल्पिक स्पष्टीकरणों पर विचार करना चाहिए और यह स्वीकार करना चाहिए कि परिणाम केवल एक संबंध का सुझाव देते हैं, कारण का नहीं।
Q4. उपयुक्त उदाहरणों के साथ फैक्टोरियल डिजाइन में मुख्य प्रभाव और अंतःक्रिया प्रभाव को स्पष्ट कीजिए। 10
Ans. फैक्टोरियल डिजाइन एक प्रकार का प्रायोगिक डिजाइन है जिसमें दो या दो से अधिक स्वतंत्र चर (जिन्हें कारक या फैक्टर कहा जाता है) और उनके अलग-अलग और संयुक्त प्रभाव को एक आश्रित चर पर एक साथ अध्ययन किया जाता है। फैक्टोरियल डिजाइन का मुख्य लाभ यह है कि यह शोधकर्ताओं को मुख्य प्रभावों (main effects) और अंतःक्रिया प्रभावों (interaction effects) दोनों का आकलन करने की अनुमति देता है।
मुख्य प्रभाव (Main Effect):
एक मुख्य प्रभाव एक स्वतंत्र चर का आश्रित चर पर समग्र प्रभाव होता है, जबकि दूसरे स्वतंत्र चर के स्तरों को नजरअंदाज या औसत कर दिया जाता है। यदि एक फैक्टोरियल डिजाइन में दो स्वतंत्र चर (A और B) हैं, तो दो संभावित मुख्य प्रभाव होंगे: एक चर A के लिए और एक चर B के लिए।
अंतःक्रिया प्रभाव (Interaction Effect):
एक अंतःक्रिया प्रभाव तब होता है जब एक स्वतंत्र चर का आश्रित चर पर प्रभाव दूसरे स्वतंत्र चर के स्तर के आधार पर बदलता है। दूसरे शब्दों में, चरों का संयुक्त प्रभाव उनके व्यक्तिगत प्रभावों के योग से भिन्न होता है। यह इंगित करता है कि स्वतंत्र चर एक-दूसरे से स्वतंत्र रूप से काम नहीं कर रहे हैं।
उदाहरण:
मान लीजिए कि एक शोधकर्ता यह अध्ययन करना चाहता है कि अध्ययन के घंटे (Study Hours) और कैफीन का सेवन (Caffeine Intake) परीक्षा के प्रदर्शन (Exam Performance – आश्रित चर) को कैसे प्रभावित करते हैं। यह एक 2×2 फैक्टोरियल डिजाइन है।
- कारक A (अध्ययन के घंटे): स्तर 1 (1 घंटा) और स्तर 2 (4 घंटे)
- कारक B (कैफीन का सेवन): स्तर 1 (कोई कैफीन नहीं) और स्तर 2 (कैफीन)
इस डिजाइन में चार स्थितियाँ (समूह) होंगी:
- 1 घंटा अध्ययन, कोई कैफीन नहीं
- 1 घंटा अध्ययन, कैफीन के साथ
- 4 घंटे अध्ययन, कोई कैफीन नहीं
- 4 घंटे अध्ययन, कैफीन के साथ
मान लीजिए कि औसत परीक्षा स्कोर (100 में से) निम्नलिखित हैं:
| | कोई कैफीन नहीं | कैफीन के साथ | पंक्ति औसत (अध्ययन के घंटे का मुख्य प्रभाव) |
| --- | --- | --- | --- |
| 1 घंटा अध्ययन | 50 | 70 | 60 |
| 4 घंटे अध्ययन | 80 | 82 | 81 |
| कॉलम औसत (कैफीन का मुख्य प्रभाव) | 65 | 76 | |
विश्लेषण:
- अध्ययन के घंटों का मुख्य प्रभाव: 4 घंटे (औसत 81) अध्ययन करने वाले छात्रों ने 1 घंटा (औसत 60) अध्ययन करने वालों की तुलना में औसतन बेहतर प्रदर्शन किया। इस प्रकार, अध्ययन के घंटों का एक महत्वपूर्ण मुख्य प्रभाव है।
- कैफीन का मुख्य प्रभाव: कैफीन लेने वाले छात्रों (औसत 76) ने कैफीन न लेने वालों (औसत 65) की तुलना में औसतन बेहतर प्रदर्शन किया। इस प्रकार, कैफीन का भी एक महत्वपूर्ण मुख्य प्रभाव है।
- अंतःक्रिया प्रभाव: अब, हमें अंतःक्रिया की जांच करने की आवश्यकता है। ध्यान दें कि कैफीन का प्रभाव अध्ययन के समय के आधार पर कैसे बदलता है।
- 1 घंटे के अध्ययन के लिए, कैफीन ने स्कोर में 20 अंकों (70-50) का बड़ा सुधार किया।
- 4 घंटे के अध्ययन के लिए, कैफीन ने स्कोर में केवल 2 अंकों (82-80) का मामूली सुधार किया।
चूंकि कैफीन का प्रभाव अध्ययन के घंटों के स्तर के आधार पर अलग-अलग है, इसलिए यहां एक अंतःक्रिया प्रभाव है। इसका मतलब है कि कैफीन विशेष रूप से तब फायदेमंद होता है जब अध्ययन का समय कम होता है। केवल मुख्य प्रभावों को देखने से यह महत्वपूर्ण जानकारी छिप जाती। यही फैक्टोरियल डिजाइन की शक्ति है।
IGNOU MPC-005 Previous Year Solved Question Paper in English
Q1. Define Reliability. Describe various methods of estimating reliability. 3+7
Ans. Definition: In psychological research and testing, reliability refers to the consistency or stability of a measure. A test or measurement tool is considered reliable if it yields similar results when used repeatedly under the same conditions. It is the degree to which a measurement is free from random error. Reliability is a necessary precondition for validity, but it does not guarantee it. A measure can be reliable but not valid, but if a measure is not reliable, it cannot be valid either.
Methods of Estimating Reliability: There are several statistical methods for estimating the reliability of a measurement instrument. The major methods are as follows:
- Test-Retest Reliability: This assesses the stability of a measure over time. In this method, the same test is administered to the same group of participants at two different points in time. The correlation coefficient is then calculated between the two sets of scores. If the correlation is high (typically +.80 or higher), the test is considered to have good test-retest reliability. The time interval should not be too short (to avoid memory effects) or too long (to allow for genuine change in the trait being measured).
- Parallel-Forms Reliability (or Alternate-Forms Reliability): This method involves developing two equivalent forms of a test that measure the same construct, with similar content, difficulty level, and format. Both forms are administered to the same group, often in immediate succession. The correlation between the scores from the two forms is then calculated. A high correlation coefficient indicates that the forms are reliable. The advantage of this method is that it reduces the memory effects seen in test-retest, but the major challenge is the difficulty and expense of creating two truly parallel forms.
- Split-Half Reliability: This is a measure of internal consistency. In this method, a test is administered once, and then the test items are split into two halves (e.g., odd-numbered items vs. even-numbered items). A total score is calculated for each half, and the correlation between these two scores is computed. Because this correlation pertains to only half the test, it is adjusted using the Spearman-Brown prophecy formula to estimate the reliability of the full-length test.
- Internal Consistency Reliability: This method also evaluates the internal consistency of a measure, i.e., how well all the items on a test measure the same underlying construct. Two common measures are:
- Cronbach’s Alpha: This is used when items have multiple possible responses (e.g., a Likert scale). It essentially calculates the average of all possible split-half correlations for a test. It is the most widely used measure of internal consistency.
- Kuder-Richardson Formula (KR-20): This is a specific case of Cronbach’s Alpha used when the item responses are dichotomous (e.g., right/wrong, yes/no).
These methods help researchers ensure that the instruments they are using are producing consistent and dependable results, which is a cornerstone of scientific measurement.
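As a worked illustration of internal consistency, the sketch below computes Cronbach's alpha from a small set of made-up Likert item scores, using only plain Python. With dichotomous (0/1) items, the identical formula yields KR-20:

```python
# Minimal sketch of Cronbach's alpha: alpha = k/(k-1) * (1 - sum(item variances) / variance(total scores)).
# All item scores below are invented for illustration.

def variance(xs):
    """Population variance of a list of scores."""
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

def cronbach_alpha(items):
    """items: one inner list of scores per test item (same respondent order)."""
    k = len(items)
    item_vars = sum(variance(item) for item in items)
    totals = [sum(scores) for scores in zip(*items)]  # per-person total scores
    return (k / (k - 1)) * (1 - item_vars / variance(totals))

# Hypothetical 3-item Likert responses from 4 participants
items = [
    [4, 5, 3, 4],   # item 1
    [3, 5, 2, 4],   # item 2
    [4, 4, 3, 5],   # item 3
]
print(round(cronbach_alpha(items), 2))   # ≈ 0.86
```

Values of alpha around 0.80 or higher are conventionally read as good internal consistency, in line with the correlation threshold mentioned for test-retest reliability above.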
Q2. Explain the criteria of good research design and discuss the types of research design. 3+7
Ans. Criteria of a Good Research Design: A research design is a logical framework or plan for a study. A good research design ensures that the research question can be answered in an unbiased, reliable, and valid manner. Its main criteria are:
- Objectivity: The research process and findings should be free from the researcher’s personal biases, emotions, or opinions. Procedures should be standardized so that other researchers can replicate them.
- Reliability: As mentioned previously, a research design should be able to yield results that are consistent over time and across different settings. If the study were to be repeated, the results should be similar.
- Validity: This is the most crucial criterion. Validity refers to the extent to which a research study actually measures what it claims to measure.
- Internal Validity: This is the degree of confidence that the results of the study are due to the manipulation of the independent variable and not some other extraneous factor.
- External Validity: This refers to the degree to which the results of the study can be generalized to other settings, other people, and over time.
- Generalizability: Closely related to external validity, a good research design produces findings that can be applied from the sample studied to a larger population.
- Control: In a good design, the researcher attempts to control or eliminate the influence of extraneous or confounding variables that could affect the results, thereby isolating the effect of the independent variable.
Types of Research Design: Research designs can be broadly classified into three main categories:
- Exploratory Research Design: This design is used when a problem is not clearly defined. Its purpose is to conduct preliminary research to better understand the problem, develop insights, and formulate hypotheses for future, more definitive research. Techniques include literature reviews, expert surveys, and case studies. It is flexible and aims to generate ideas.
- Descriptive Research Design: The goal of this design is to accurately and systematically describe a population, situation, or phenomenon. It answers the “what, where, when, and how” questions, but not the “why”. Designs in this category include:
- Survey Research: Collecting data from a sample through questionnaires or interviews.
- Case Study: An in-depth study of a single individual, group, or event.
- Observational Research: Observing behavior systematically in a natural or controlled setting.
- Causal (Experimental) Research Design: This design aims to determine cause-and-effect relationships. The researcher manipulates one or more independent variables and observes their effect on a dependent variable, while holding all other variables constant.
- True Experimental Design: Involves manipulation of variables, a control group, and random assignment of participants. It provides the highest internal validity.
- Quasi-Experimental Design: Similar to true experiments but lacks random assignment. It is used in situations where randomization is not feasible or ethical.
- Pre-Experimental Design: These designs lack a control group or pre-testing and are susceptible to numerous threats to internal validity. They are considered the weakest type of experimental design.
The choice of research design depends on the research question, available resources, and the objectives of the study.
Q3. Differentiate between ex-post facto research and experimental research. Describe the steps in ex-post facto research. 3+7
Ans. Difference between Ex-Post Facto and Experimental Research: Both ex-post facto research and experimental research aim to study relationships between variables, but they differ fundamentally in their methodology.
The key difference lies in the manipulation of the independent variable (IV).
- Experimental Research: In this, the researcher actively manipulates one or more independent variables to observe their effect on a dependent variable (DV). For example, to study the effect of a new teaching method (IV) on academic performance (DV), a researcher would assign one group to be taught by the new method and another by the traditional method. It also crucially involves random assignment of participants to conditions, which ensures the initial equivalence of groups and enhances internal validity.
- Ex-Post Facto Research: In this type of research, the researcher does not manipulate the independent variable because the event has already occurred “after the fact”. The researcher selects subjects for whom the different levels of the independent variable already exist. For example, to study the effect of smoking (IV) on lung health (DV), a researcher would select a group of smokers and a group of non-smokers and compare their lung health. The decision to smoke or not was made by the participants long before the study. Random assignment is not possible, which lowers internal validity because other factors (e.g., lifestyle, genetics) could also be influencing the results.
In summary, experimental research is stronger for inferring cause-and-effect because it involves manipulation and control, whereas ex-post facto research can only suggest relationships or associations between variables.
Steps in Ex-Post Facto Research: Conducting ex-post facto research involves the following steps:
- Define the Problem: Clearly identifying the research problem. The researcher identifies the phenomenon or condition they want to study and hypothesizes a potential cause-and-effect relationship. (e.g., Does early childhood education have an effect on later social success?)
- Formulate Hypotheses: Stating a clear and testable hypothesis about the expected relationship between the independent and dependent variables.
- Select the Groups: Selecting two or more groups of participants who differ on the independent variable. One group has the characteristic or experience being studied (the “experimental” group), while the other does not (the “control” or comparison group). The selection should be done carefully to ensure the groups are as similar as possible on other relevant variables.
- Data Collection: Collecting data on the dependent variable from the selected groups. This can be done through surveys, tests, interviews, or existing records. The researcher might also collect data on extraneous variables that may need to be statistically controlled.
- Analysis of Data: Using appropriate statistical techniques (e.g., t-tests, ANOVA) to test for a significant difference in the mean scores on the dependent variable between the groups.
- Interpretation of Results: Interpreting the findings cautiously. Because it lacks manipulation and randomization, it is not possible to draw direct cause-and-effect conclusions. The researcher must consider alternative explanations and acknowledge that the results merely suggest a relationship, not causation.
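The analysis step above can be sketched as follows. The two groups and their scores are invented for illustration, and a pooled-variance t statistic is computed by hand rather than with a statistics library:

```python
# Hypothetical sketch of the "Analysis of Data" step: an independent-samples
# t-test comparing two pre-existing groups (e.g., non-smokers vs. smokers)
# on a dependent variable. All scores are made up.

def mean(xs):
    return sum(xs) / len(xs)

def sample_var(xs):
    """Sample variance (n - 1 denominator)."""
    m = mean(xs)
    return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

def t_independent(g1, g2):
    """Pooled-variance t statistic for two independent groups."""
    n1, n2 = len(g1), len(g2)
    sp2 = ((n1 - 1) * sample_var(g1) + (n2 - 1) * sample_var(g2)) / (n1 + n2 - 2)
    return (mean(g1) - mean(g2)) / (sp2 * (1 / n1 + 1 / n2)) ** 0.5

group_a = [72, 68, 75, 70, 74]   # e.g., non-smokers' lung-function scores
group_b = [64, 66, 61, 67, 63]   # e.g., smokers' scores
print(round(t_independent(group_a, group_b), 2))   # ≈ 4.56
```

Even a large t value here would only indicate an association between group membership and the outcome, not causation, for exactly the interpretive reasons discussed in the final step.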
Q4. Elucidate the main effect and interaction effect in factorial design with suitable examples. 10
Ans. A factorial design is a type of experimental design that involves two or more independent variables (called factors), allowing the researcher to study their individual and combined effects on a dependent variable simultaneously. The primary advantage of a factorial design is that it enables researchers to assess both main effects and interaction effects.
Main Effect: A main effect is the overall effect of one independent variable on the dependent variable, ignoring or averaging across the levels of the other independent variables. If a factorial design has two independent variables (A and B), there will be two potential main effects: one for variable A and one for variable B.
Interaction Effect: An interaction effect occurs when the effect of one independent variable on the dependent variable changes depending on the level of another independent variable. In other words, the combined effect of the variables is different from the sum of their individual effects. It indicates that the independent variables are not working independently of each other.
Example: Let’s say a researcher wants to study how Study Hours and Caffeine Intake affect Exam Performance (the dependent variable). This is a 2×2 factorial design.
- Factor A (Study Hours): Level 1 (1 Hour) and Level 2 (4 Hours)
- Factor B (Caffeine Intake): Level 1 (No Caffeine) and Level 2 (Caffeine)
This design yields four conditions (groups):
- 1 Hour study, No Caffeine
- 1 Hour study, with Caffeine
- 4 Hours study, No Caffeine
- 4 Hours study, with Caffeine
Let’s assume the mean exam scores (out of 100) are as follows:
| | No Caffeine | Caffeine | Row Mean (Main Effect of Study Hours) |
| --- | --- | --- | --- |
| 1 Hour Study | 50 | 70 | 60 |
| 4 Hours Study | 80 | 82 | 81 |
| Column Mean (Main Effect of Caffeine) | 65 | 76 | |
Analysis:
- Main Effect of Study Hours: On average, students who studied for 4 hours (mean of 81) performed better than those who studied for 1 hour (mean of 60). Thus, there is a significant main effect of study hours.
- Main Effect of Caffeine: On average, students who had caffeine (mean of 76) performed better than those who did not (mean of 65). Thus, there is also a significant main effect of caffeine.
- Interaction Effect: Now, we need to check for an interaction. Notice how the effect of caffeine changes based on study time.
- For 1 hour of study, caffeine produced a large 20-point improvement in scores (70 – 50).
- For 4 hours of study, caffeine produced only a minor 2-point improvement (82 – 80).
Because the effect of caffeine is different depending on the level of study hours, there is an interaction effect. This interaction tells us that caffeine is particularly beneficial when study time is low. This crucial insight would be missed by only looking at the main effects. This is the power of the factorial design.
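The arithmetic in the 2×2 table above can be reproduced in a short script. The cell means are the ones used in the example; the code derives the row and column means (main effects) and the simple effects of caffeine at each level of study hours:

```python
# Reproducing the 2x2 factorial example: cell means from the table above.
cells = {          # (study_hours, caffeine) -> mean exam score
    ("1h", "no"): 50, ("1h", "yes"): 70,
    ("4h", "no"): 80, ("4h", "yes"): 82,
}

def mean(xs):
    return sum(xs) / len(xs)

# Row means: main effect of study hours (averaging over caffeine)
row_1h = mean([cells[("1h", "no")], cells[("1h", "yes")]])    # 60
row_4h = mean([cells[("4h", "no")], cells[("4h", "yes")]])    # 81

# Column means: main effect of caffeine (averaging over study hours)
col_no  = mean([cells[("1h", "no")], cells[("4h", "no")]])    # 65
col_yes = mean([cells[("1h", "yes")], cells[("4h", "yes")]])  # 76

# Simple effects of caffeine at each study level; unequal values signal an interaction
effect_1h = cells[("1h", "yes")] - cells[("1h", "no")]        # 20
effect_4h = cells[("4h", "yes")] - cells[("4h", "no")]        # 2
print(row_1h, row_4h, col_no, col_yes, effect_1h, effect_4h)
```

The unequal simple effects (20 vs. 2) are precisely what the interaction effect captures; in a real analysis these differences would be tested with a two-way ANOVA.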
Q5. Explain non-probability sampling methods. 6
Ans. Non-probability sampling is a collection of sampling methods where the selection of units from a population is not based on random chance. Instead, it relies on the subjective judgment of the researcher or other non-random criteria. This means that not every individual in the population has a known, non-zero chance of being selected. While less rigorous than probability sampling for making generalizations to a larger population, these methods are often used in qualitative research, exploratory studies, or when time and resources are limited.
The main types of non-probability sampling methods include:
- Convenience Sampling (or Accidental/Haphazard Sampling): This is the most common and least rigorous method. The researcher selects participants who are most easily accessible or readily available. For example, a student researcher might survey their classmates or people in a shopping mall simply because they are easy to find. This method is quick and inexpensive but is highly prone to bias and its results are not generalizable.
- Purposive Sampling (or Judgmental Sampling): In this method, the researcher uses their own judgment to select participants who they believe are most representative of the population or have specific knowledge or expertise relevant to the research question. For instance, if studying the experiences of expert chess players, a researcher would deliberately seek out and select individuals with a high rating or a long history in the sport.
- Quota Sampling: This is the non-probability equivalent of stratified sampling. The researcher first identifies relevant subgroups (strata) in the population (e.g., by age, gender, ethnicity) and determines the proportion of each subgroup in the population. Then, they set a quota for the number of individuals to be sampled from each subgroup. The final selection of participants within each quota is done by convenience or purposive sampling, not randomly. For example, if a population is 55% female and 45% male, a sample of 100 would aim to include 55 women and 45 men.
- Snowball Sampling (or Chain-Referral Sampling): This method is used when the target population is hard to find or access, such as individuals with a rare disease, homeless people, or members of a specific subculture. The researcher starts by identifying and interviewing a few initial participants. These participants are then asked to refer other people they know who fit the study criteria. The sample “snowballs” from a small initial group.
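The quota-sampling logic described above can be sketched in a few lines. The population, subgroup labels, and quota numbers here are hypothetical; note that within each quota, selection is by availability (convenience), not random draw:

```python
# Minimal sketch of quota sampling: fill a fixed quota per subgroup from
# whoever is encountered first. Population and quotas are invented.

def quota_sample(population, key, quotas):
    """Take the first available members of each subgroup until its quota fills."""
    filled = {group: [] for group in quotas}
    for person in population:
        group = key(person)
        if group in filled and len(filled[group]) < quotas[group]:
            filled[group].append(person)
    return filled

# Hypothetical population of (gender, id) tuples: 80 women, 80 men
people = [("F", i) for i in range(80)] + [("M", i) for i in range(80)]

# Target sample of 100 mirroring a 55% female / 45% male population split
sample = quota_sample(people, key=lambda p: p[0], quotas={"F": 55, "M": 45})
print(len(sample["F"]), len(sample["M"]))   # 55 45
```

Because the fill order is first-come-first-served rather than random, the quotas guarantee the sample's composition but not its representativeness, which is exactly the limitation noted above.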
Q6. Define survey research and explain the different types of survey research. 2+4
Ans. Definition of Survey Research: Survey research is a quantitative research method used to collect information from a sample of individuals through their responses to a set of questions. The data is typically gathered via questionnaires or interviews. The primary purpose of survey research is to describe the characteristics, attitudes, beliefs, or behaviors of a large population by studying a smaller, representative sample of that population. It is one of the most common methods in social sciences for collecting self-report data.
Different Types of Survey Research: Surveys can be classified based on their time dimension or their method of administration. Based on the time dimension, the main types are:
- Cross-Sectional Surveys: This is the most common type of survey. Data is collected from a sample at a single point in time. It provides a “snapshot” of the population’s opinions, attitudes, or behaviors at that specific moment. For example, a pre-election poll that asks a sample of voters about their voting intentions is a cross-sectional survey. They are relatively quick and inexpensive to conduct but cannot be used to analyze changes over time or establish causal relationships.
- Longitudinal Surveys: In longitudinal surveys, data is collected from the same population at multiple points in time. This allows researchers to study changes and trends over a period. There are three main types of longitudinal surveys:
- Trend Surveys: These surveys examine changes in a general population over time. Different samples of people are drawn from the same population (e.g., university freshmen) at different times. For example, surveying a new sample of freshmen each year to track changes in their attitudes towards technology.
- Cohort Surveys: These focus on a specific group of people (a cohort) who share a common characteristic or experience, such as a birth year or graduation year. Researchers survey different samples from this specific cohort over time. For example, tracking the career paths of the “Class of 2020” by surveying a different sample from this class every five years.
- Panel Surveys: These surveys collect data from the exact same sample of individuals (the panel) at multiple time points. This is the most powerful type for studying individual change, as it can track the development of specific people’s attitudes or behaviors. However, they are expensive, time-consuming, and suffer from issues like attrition (panel members dropping out).
Surveys can also be classified by administration method, such as mail surveys, telephone surveys, online surveys, and face-to-face interviews, each with its own advantages and disadvantages regarding cost, response rate, and data quality.
Q7. Define mixed factorial design with an example. Discuss the interrupted time-series design. 3+3
Ans. Mixed Factorial Design: A mixed factorial design (also known as a mixed-design ANOVA or split-plot design) is a type of experimental design that includes at least one between-subjects independent variable and at least one within-subjects independent variable.
- A between-subjects variable is one where different groups of participants are assigned to different levels of the variable (e.g., a control group vs. an experimental group).
- A within-subjects variable is one where the same group of participants is exposed to all levels of the variable (e.g., measurements taken at pre-test, mid-test, and post-test).
This design is “mixed” because it combines both types of independent variables, allowing for the examination of their main effects and interaction effects.
Example: A researcher wants to test the effectiveness of a new cognitive-behavioral therapy (CBT) for anxiety compared to a standard talk therapy.
- Between-Subjects Variable: Type of Therapy (with two levels: CBT vs. Talk Therapy). Participants are randomly assigned to one of these two groups.
- Within-Subjects Variable: Time (with three levels: Pre-treatment, Post-treatment, 6-month Follow-up). All participants in both groups have their anxiety levels measured at these three time points.
In this 2×3 mixed factorial design, the researcher can determine:
- The main effect of therapy type (did one therapy work better overall?).
- The main effect of time (did anxiety levels change over time for everyone?).
- The interaction effect (did the change in anxiety over time depend on which therapy the participant received?). For example, the CBT group might show a much larger decrease in anxiety from pre- to post-treatment than the talk therapy group.
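The 2×3 example above can be made concrete with a small numeric sketch. The mean anxiety scores below are invented purely for illustration; the point is how the two main effects and the interaction are read off the table of cell means:

```python
import numpy as np

# Hypothetical mean anxiety scores for the 2x3 mixed design.
# Rows: therapy type (between-subjects); columns: time (within-subjects).
cells = np.array([
    [30.0, 15.0, 14.0],   # CBT: pre, post, 6-month follow-up
    [30.0, 24.0, 23.0],   # Talk therapy: pre, post, 6-month follow-up
])

therapy_means = cells.mean(axis=1)   # main effect of therapy type (row means)
time_means = cells.mean(axis=0)      # main effect of time (column means)
change = cells[:, 0] - cells[:, 1]   # interaction: pre-to-post drop per group

print("Therapy means:", therapy_means)   # CBT lower overall
print("Time means:", time_means)         # anxiety drops over time for everyone
print("Pre-to-post drop:", change)       # CBT drops 15 points, talk therapy 6
```

Because the pre-to-post drop differs between the two groups (15 vs. 6 points), these hypothetical data would show an interaction: the effect of time depends on which therapy was received. A full analysis would test these effects with a mixed-design ANOVA on the raw scores rather than the cell means.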
Interrupted Time-Series Design: An interrupted time-series design is a quasi-experimental design used to evaluate the effect of an intervention or event. It involves taking a series of periodic measurements on a dependent variable for a group or individual both before and after the intervention is introduced. The “interruption” in the series of measurements is the intervention itself.
The core logic is to establish a baseline trend in the data before the intervention. If the intervention has an effect, there should be a clear change (an “interruption”) in the data pattern immediately after the intervention is implemented. This change could be in the level (a sudden jump or drop) or the slope (a change in the rate of increase or decrease) of the time series.
For example, to evaluate the impact of a new city-wide smoking ban, a researcher could collect data on the number of hospital admissions for respiratory illnesses for 24 months before the ban was enacted and for 24 months after. By plotting this data, the researcher can visually and statistically determine if there was a significant drop in admissions immediately following the implementation of the ban. This design is stronger than a simple pre-test/post-test design because the multiple measurements help to rule out threats to internal validity like maturation or regression to the mean.
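The smoking-ban example can be sketched as a segmented regression on simulated data. The monthly admission counts and the size of the drop below are invented; the sketch only shows how fitting a level term, a trend term, and a step term at the interruption recovers the change in level:

```python
import numpy as np

# Hypothetical monthly respiratory admissions: 24 months before the
# smoking ban and 24 months after, with a built-in drop of 12 at the ban.
rng = np.random.default_rng(0)
t = np.arange(48)
post = (t >= 24).astype(float)                         # 1 after the ban
y = 100 + 0.2 * t - 12 * post + rng.normal(0, 2, 48)   # noisy series

# Segmented regression: intercept + baseline trend + step change at the ban
X = np.column_stack([np.ones(48), t, post])
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
print(f"Estimated level change at the ban: {coef[2]:.1f}")  # near -12
```

A fuller model would also include a post-intervention slope term (e.g., `(t - 24) * post`) to test whether the rate of change shifted after the ban, not just the level.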
Q8. Elucidate the assumptions and steps in ethnography. 6
Ans. Ethnography is a qualitative research method that involves the systematic study of people in their own environment to understand their culture, social interactions, and perspectives from an “insider’s point of view.” It originated in anthropology and involves deep immersion and participation in the community being studied.
Assumptions of Ethnography: Ethnographic research is guided by several core assumptions:
- Holism: Cultures are complex, interconnected systems. To understand any part of a culture (e.g., a ritual, a social practice), it must be seen in the context of the whole.
- Cultural Relativism: The researcher aims to understand a group’s beliefs and behaviors from their own perspective, rather than judging them by the standards of the researcher’s own culture.
- Emic Perspective: The primary goal is to understand the world from the participants’ (or “insiders'”) point of view. This is contrasted with the etic (outsider’s) perspective.
- Social Construction of Reality: Meanings are not inherent but are created, shared, and maintained through social interaction. Ethnography seeks to uncover these shared meanings.
- Researcher as Instrument: The researcher is the primary tool for data collection and analysis. Their observations, interactions, and interpretations are central to the process.
Steps in Ethnography: While the process is often iterative and flexible, a typical ethnographic study involves the following steps:
- Formulating a Research Question: Starting with a broad interest in a particular social group, culture, or setting, and refining the research question as the study progresses.
- Site and Participant Selection: Choosing a specific location or community (the “field”) and the people to be studied. This is often done using purposive sampling.
- Gaining Entry and Building Rapport: This is a critical step where the researcher must negotiate access to the community and build trust with its members (the “gatekeepers” and participants). This involves being transparent about the research goals and ensuring ethical conduct.
- Data Collection (Fieldwork): This is the prolonged immersion phase. The primary methods include:
- Participant Observation: The researcher participates in the daily life of the community while also systematically observing and recording events, behaviors, and conversations.
- In-depth Interviews: Conducting formal and informal interviews with key informants to gain deeper insights into their experiences and perspectives.
- Document Analysis: Collecting and analyzing artifacts, documents, photographs, and other materials relevant to the culture.
- Data Analysis: Analysis is an ongoing process that begins in the field. The researcher writes detailed field notes, transcribes interviews, and begins to identify patterns, themes, and categories in the data. This is an inductive process of building understanding from the ground up.
- Writing the Ethnography: The final step is to produce a detailed, descriptive written account (an ethnography) that provides a “thick description” of the culture. This narrative integrates observations, participant quotes, and the researcher’s analysis to convey the lived reality of the group studied.
Q9. Explain the steps in discourse analysis and discuss its relevance. 6
Ans. Discourse analysis is a qualitative research approach that involves the critical study of language in use, whether spoken, written, or signed. It goes beyond the literal meaning of words to examine how language is used in social contexts to construct meaning, perform social actions, and maintain power structures. It treats language not as a neutral medium for communication, but as a form of social practice.
Steps in Discourse Analysis: While specific approaches to discourse analysis can vary (e.g., Critical Discourse Analysis, Foucauldian Discourse Analysis), a general process involves the following steps:
- Define the Research Question and Select Data: The first step is to formulate a clear research question focused on a social issue or phenomenon. Based on this question, the researcher selects the relevant “discourse” to be analyzed. This could be a collection of political speeches, newspaper articles, therapy session transcripts, online forum discussions, or advertisements.
- Data Preparation and Transcription: The selected texts or recordings are prepared for analysis. If the data is spoken, it must be transcribed in detail, often including not just words but also pauses, intonations, and non-verbal cues.
- Coding and Identifying Patterns: The researcher systematically reads through the data to identify patterns, themes, rhetorical devices, metaphors, and specific linguistic features. This coding process helps to organize the data and highlight key aspects of the discourse. The focus is on how things are said, not just what is said.
- Analyzing the Discourse in Context: This is the core of the analysis. The identified patterns are interpreted within their broader social, historical, and political context. The analyst asks questions like:
- What social action is being performed with this language? (e.g., justifying, blaming, persuading)
- Whose interests are being served? Who is included or excluded?
- What assumptions or ideologies are being promoted or challenged?
- How are power relations being constructed or maintained?
- Interpretation and Reporting Findings: The final step involves synthesizing the analysis to answer the research question. The findings are presented, supported by specific examples from the data, to demonstrate how language works to create and shape social reality.
Relevance of Discourse Analysis: Discourse analysis is highly relevant across many fields, including psychology, sociology, political science, and media studies. Its key contributions are:
- It reveals the hidden ideologies, assumptions, and power dynamics embedded in everyday language.
- It helps us understand how social categories like gender, race, and illness are constructed and maintained through language.
- In psychology, it can be used to analyze therapeutic conversations to understand how problems and solutions are co-constructed by therapist and client.
- It provides a powerful tool for critically examining media messages, political rhetoric, and institutional documents to uncover how they influence public opinion and social policy.
Q10. Objectivity and safeguards in research process. 3
Ans. Objectivity in research refers to the principle of ensuring that the inquiry is free from the biases, personal beliefs, values, and emotions of the researcher. It is the striving for a neutral and impartial approach in all stages of the research process, from designing the study and collecting data to analyzing and interpreting the results. The goal is to produce findings that reflect the reality of the phenomenon being studied, rather than the researcher’s preconceived notions.
To maintain objectivity, researchers employ several safeguards:
- Standardized Procedures: Using clear, precise, and consistent procedures for data collection and measurement for all participants ensures that variations are not due to inconsistencies in how the study was conducted.
- Operational Definitions: Clearly defining variables in terms of the specific, observable, and measurable operations used to assess them. This reduces ambiguity and personal interpretation.
- Blinding Procedures: In a single-blind study, participants are unaware of the condition they are in. In a double-blind study, neither the participants nor the researchers interacting with them know who is in the experimental or control group. This minimizes experimenter and participant bias (e.g., placebo effect).
- Peer Review: Before publication, research is critically evaluated by other experts in the field. This process helps to identify potential biases, methodological flaws, and errors in interpretation.
- Replication: Repeating a study, often by different researchers in different settings, to see if the same results are obtained. If findings are replicable, confidence in their objectivity and validity increases.
Q11. Field experiment. 3
Ans. A field experiment is a type of experiment that is conducted in a real-world, natural setting rather than in a controlled laboratory environment. Like a lab experiment, it involves the manipulation of one or more independent variables by the researcher to observe the effect on a dependent variable. However, because it takes place in a natural environment (e.g., a school, a hospital, a public park), participants are often unaware that they are part of a study.
Key Characteristics:
- Natural Setting: The study is conducted in the participants’ everyday environment.
- Manipulation of IV: The researcher actively manipulates the independent variable.
- Lower Participant Awareness: Participants may not know they are being observed, which reduces reactivity and demand characteristics.
Advantages: The main advantage of a field experiment is its high external validity (or ecological validity). Because the research is conducted in a realistic setting, the findings are more likely to be generalizable to real-life situations.
Disadvantages: The primary disadvantage is lower internal validity compared to a lab experiment. In a natural setting, it is much more difficult for the researcher to control for extraneous and confounding variables, making it harder to establish a clear cause-and-effect relationship. There can also be ethical concerns related to informed consent if participants are unaware they are in an experiment.
Example: A classic field experiment studied the bystander effect by staging an emergency (e.g., someone collapsing) on a subway train and observing whether the number of other passengers present influenced the likelihood of someone offering help.
Q12. Relevance of Grounded theory. 3
Ans. Grounded theory is a systematic qualitative research methodology where the primary goal is to generate or discover a theory that is “grounded” in data that has been systematically collected and analyzed. Developed by sociologists Barney Glaser and Anselm Strauss, it is an inductive approach, meaning it starts with data collection rather than with a pre-existing theoretical framework or hypothesis. The theory emerges from the data itself.
The relevance of grounded theory is significant, particularly in psychology and other social sciences, for several reasons:
- Theory Development for Unexplored Areas: It is exceptionally useful for studying social processes, experiences, and phenomena for which little or no theory exists. It allows researchers to build theories from the ground up, based on the lived realities of participants.
- Closeness to Data: Because the theory is derived directly and systematically from the data, it has a strong empirical grounding. This ensures that the resulting theoretical explanations are relevant to and reflective of the context and participants being studied, rather than being an abstract model imposed by the researcher.
- Capturing Complexity and Process: Grounded theory is particularly well-suited for capturing complex social processes and how they change over time. The methodology’s focus on action and process (e.g., through techniques like constant comparison and theoretical sampling) helps to produce dynamic, rather than static, explanations.
- Practical Application: The theories generated through this method are often highly relevant to practice because they originate from real-world problems and settings. For example, a grounded theory study on patient recovery could yield a model that nurses and doctors can use to better support their patients’ healing processes.
In essence, the relevance of grounded theory lies in its ability to produce rich, contextualized, and empirically-based theories that provide deep insights into the social world.
Download the IGNOU MPC-005 previous year question paper PDFs to improve your preparation. These IGNOU solved question papers in Hindi and English help you understand the exam pattern and score better.
Thanks!