The IGNOU BCS-042 Solved Question Paper PDF Download page brings exam resources together in one place. Here you can find IGNOU previous-year question papers for BCS-042, solved with detailed answers to the important questions, and download all previous-year papers in a single PDF for effective preparation.
- IGNOU BCS-042 Solved Question Paper in Hindi
- IGNOU BCS-042 Solved Question Paper in English
- IGNOU Previous Year Solved Question Papers (All Courses)
Whether you are looking for the solved papers in English or in Hindi, this page offers both. Working through these solved papers helps you understand the exam pattern, improve your answer writing, and build confidence for upcoming exams.
IGNOU BCS-042 Solved Question Paper PDF

This section provides the IGNOU BCS-042 Solved Question Paper PDF in both Hindi and English. The papers include detailed answers that clarify the exam pattern and strengthen your preparation, and all previous-year papers are also available in one PDF for quick revision before exams.
IGNOU BCS-042 Previous Year Solved Question Paper in Hindi
Q1. (a) Explain the concept of space complexity in the context of algorithm analysis. How is it different from time complexity? Provide an example to illustrate the calculation of space complexity. (5) (b) Given a list of distinct integers, write an algorithm to determine the position of an integer in the list using linear search and count the number of comparison operations required. (5) (c) Describe any three of the following: (3×2=6) (i) Greedy technique (ii) Cycle in a directed graph (iii) Upper bound (iv) Branch and bound (d) Use the Master Theorem to give tight asymptotic bounds for the following: (2×2=4) (i) T(n) = 4T(n/2) + n² (ii) T(n) = 2T(n/3) + n√n
Ans.
(a) Space Complexity
In algorithm analysis, space complexity refers to the total amount of memory space used by an algorithm as a function of the size of the input. It measures how much memory or storage an algorithm needs to complete its execution.
It consists of two types of space:
- Fixed Space (Auxiliary Space): the extra or temporary space used by the algorithm, including memory for variables, pointers, and so on. It does not depend on the size of the input.
- Variable Space: space that varies with the size of the input, such as dynamic memory allocation and recursion stack space.
Difference from time complexity:
The key difference is that time complexity measures the time an algorithm takes to run (in terms of the number of operations), whereas space complexity measures the memory space required to run it. An algorithm may be efficient in time but inefficient in space, or vice versa; ideally, an algorithm is efficient in both.
Example: Consider a function to sum the elements of an array.

```javascript
function sum(arr, n) {
  let total = 0;                // 1 unit of space
  for (let i = 0; i < n; i++) { // 1 unit for 'i'
    total += arr[i];
  }
  return total;
}
```

Here, the input array `arr` requires `n` units of space, while variables like `total` and `i` require constant space. Therefore, the space complexity S(n) = (input space) + (auxiliary space) = O(n) + O(1) = O(n); considering only the auxiliary space, it is O(1).
(b) Linear Search Algorithm and Comparison Count
Algorithm: LinearSearch(List A, value x)
- Set `n` as the number of elements in list A.
- Initialize `comparisons` to 0.
- Loop for `i` = 0 to `n-1`:
- Increment `comparisons` by 1.
- If `A[i]` is equal to `x`, return `i` (the position) and the number of `comparisons`.
- If the loop finishes and `x` is not found, return -1 (or 'not found') and the number of `comparisons`.
Counting comparison operations:
In linear search, each element is compared with the target value.
- Best case: if the desired element is at the very first position in the list, only 1 comparison is required.
- Average case: if the element is present in the list, about n/2 comparisons are required on average.
- Worst case: if the element is at the end of the list or not present at all, it is compared with all `n` elements, so n comparisons are required.
For example, to find 30 in the list {10, 50, 30, 70}, the algorithm compares 10, 50, and then 30; it is found at position 2 after 3 comparisons.
(c) Description of any three
(i) Greedy Technique
The greedy technique is an algorithmic paradigm that solves problems by making the locally optimal choice at each step, in the hope that these local optima lead to a globally optimal solution. A greedy algorithm does not consider all possible solutions to a problem; instead, it makes one choice at a time, whichever looks best at the moment. It does not always guarantee a globally optimal solution, but for some problems, such as Kruskal's and Prim's algorithms (minimum spanning tree) and Dijkstra's algorithm (single-source shortest path), it does.
(ii) Cycle in a Directed Graph
A cycle in a directed graph (digraph) is a path that starts and ends at the same vertex. Formally, it is a non-empty sequence of vertices v0, v1, …, vk where v0 = vk and there is a directed edge from vi to vi+1 for each i from 0 to k-1. Cycle detection is important in many applications, such as topological sorting (not possible in a cyclic graph) and deadlock detection in operating systems. A common detection method is depth-first search (DFS): during a DFS, if we visit a vertex that is already on the current recursion stack, a cycle has been found.
(iv) Branch and Bound
Branch and bound is an algorithm design paradigm commonly used for combinatorial optimization problems. It is a state-space search technique with two main processes:
- Branching: dividing the problem into smaller sub-problems, which builds a state-space tree.
- Bounding: for each sub-problem (a node in the tree), computing a bound (upper or lower) on the best solution obtainable within it. If that bound is worse than the best solution found so far, the entire subtree need not be explored further and that branch can be "pruned".
It searches for a solution systematically and, through pruning, significantly reduces the search space. It is used to solve problems such as the Traveling Salesman Problem (TSP).
(d) Using the Master Theorem
The Master Theorem provides asymptotic bounds for recurrence relations of the form T(n) = aT(n/b) + f(n); we compare n^(log_b a) with f(n).
(i) T(n) = 4T(n/2) + n²
- Here, a = 4, b = 2, and f(n) = n².
- Compute n^(log_b a) = n^(log₂ 4) = n².
- Since f(n) = n² = Θ(n^(log_b a)), Case 2 of the Master Theorem applies.
- Therefore, T(n) = Θ(n^(log_b a) log n) = Θ(n² log n).
(ii) T(n) = 2T(n/3) + n√n
- Here, a = 2, b = 3, and f(n) = n√n = n^1.5.
- Compute n^(log_b a) = n^(log₃ 2) ≈ n^0.63.
- Since f(n) = n^1.5, f(n) = Ω(n^(log₃ 2 + ε)) for some ε > 0 (for example, ε = 0.87). This points to Case 3.
- We must check the regularity condition a·f(n/b) ≤ c·f(n) for some c < 1: here 2·f(n/3) = 2(n/3)^1.5 = (2/3^1.5)·n^1.5 = (2/(3√3))·n^1.5. Since 2/(3√3) ≈ 0.385 < 1, the condition holds.
- Therefore, T(n) = Θ(f(n)) = Θ(n√n).
IGNOU BCS-042 Previous Year Solved Question Paper in English
Q1. (a) Explain the concept of space complexity in the context of algorithm analysis. How is it different from time complexity? Provide an example to illustrate the calculation of space complexity. (5) (b) Given a list of distinct integers, write an algorithm to determine the position of an integer in the list using linear search and count the number of comparison operations required. (5) (c) Describe any three of the following: (3×2=6) (i) Greedy technique (ii) Cycle in a Directed Graph (iii) Upper Bound (iv) Branch and Bound (d) Use Master Theorem to give tight asymptotic bounds of the following: (2×2=4) (i) T(n) = 4T(n/2) + n² (ii) T(n) = 2T(n/3) + n√n
Ans.
(a) Space Complexity
In algorithm analysis, space complexity refers to the total amount of memory space used by an algorithm as a function of the size of the input. It measures how much memory or storage an algorithm needs to complete its execution.
It consists of two types of space:
- Fixed Space (Auxiliary Space): This is the extra or temporary space used by the algorithm, which includes memory used for variables, pointers, etc. It does not depend on the size of the input.
- Variable Space: This space varies with the size of the input. It includes dynamic memory allocation, recursion stack space, etc.
Difference from Time Complexity:
The key difference is that time complexity measures the time it takes for an algorithm to run (in terms of the number of operations), whereas space complexity measures the memory space required to run that algorithm. An algorithm may be efficient in time but inefficient in space, or vice-versa. Ideally, an algorithm is efficient in both.
Example: Consider a function to sum the elements of an array.
```javascript
function sum(arr, n) {
  let total = 0;                // 1 unit of space
  for (let i = 0; i < n; i++) { // 1 unit for 'i'
    total += arr[i];
  }
  return total;
}
```
Here, the input array `arr` requires `n` space. Inside the function, variables like `total` and `i` require constant space. Therefore, the space complexity S(n) = (Input Space) + (Auxiliary Space) = O(n) + O(1) = O(n). If we only consider the auxiliary space, it is O(1).
(b) Linear Search Algorithm and Comparison Count
Algorithm: LinearSearch(List A, value x)
- Set `n` as the number of elements in list A.
- Initialize `comparisons` to 0.
- Loop for `i` = 0 to `n-1`.
- Increment `comparisons` by 1.
- If `A[i]` is equal to `x`, then:
- Return `i` (the position) and the number of `comparisons`.
- If the loop finishes and `x` is not found, return -1 (or ‘not found’) and the number of `comparisons`.
Counting Comparison Operations:
In linear search, every element is compared with the target value.
- Best Case: If the desired element is at the very first position in the list, only 1 comparison will be required.
- Average Case: On average, if the element is present in the list, about n/2 comparisons will be required.
- Worst Case: If the element is at the end of the list or not present in the list at all, it will be compared with all `n` elements. Thus, n comparisons will be required.
For example, to find 30 in the list {10, 50, 30, 70}, the algorithm will compare 10, 50, and then 30. It will be found at position 2 after 3 comparisons.
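The algorithm above can be sketched in JavaScript (the function name `linearSearch` and the returned field names are illustrative, not prescribed by the paper):

```javascript
// Linear search with a comparison counter, as described in the steps above.
function linearSearch(A, x) {
  let comparisons = 0;
  for (let i = 0; i < A.length; i++) {
    comparisons++;                         // one comparison per element examined
    if (A[i] === x) {
      return { position: i, comparisons }; // found: report index and count
    }
  }
  return { position: -1, comparisons };    // not found after n comparisons
}

// Searching for 30 in {10, 50, 30, 70}: found at position 2 after 3 comparisons.
console.log(linearSearch([10, 50, 30, 70], 30)); // { position: 2, comparisons: 3 }
```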
(c) Description of any three
(i) Greedy Technique
The greedy technique is an algorithmic paradigm that solves problems by making locally optimal choices at each stage with the hope that these local optimal choices will lead to a globally optimal solution. A greedy algorithm does not consider all possible solutions for a problem; instead, it makes one choice at a time that looks best at the moment. It does not always guarantee a globally optimal solution, but for some problems, such as Kruskal’s and Prim’s algorithms (Minimum Spanning Tree) and Dijkstra’s algorithm (single source shortest path), it does.
(ii) Cycle in a Directed Graph
A cycle in a directed graph (digraph) is a path that starts and ends at the same vertex. Formally, it is a non-empty sequence of vertices v0, v1, …, vk where v0 = vk and there is a directed edge from vi to vi+1 for each i from 0 to k-1. Detecting cycles is crucial in many applications, such as topological sorting in a graph (not possible in a cyclic graph), and detecting deadlocks in operating systems. A common method to detect a cycle is by using Depth-First Search (DFS). During a DFS, if we visit a vertex that is already present in the current recursion stack, a cycle is detected.
(iv) Branch and Bound
Branch and Bound is an algorithm design paradigm generally used for combinatorial optimization problems. It is a state space search technique. The method involves two main processes:
- Branching: Dividing the problem into smaller sub-problems. This creates a state-space tree.
- Bounding: For each sub-problem (a node in the tree), we calculate a bound (upper or lower) on the best possible solution that can be obtained within that sub-problem. If this bound is worse than the best solution found so far, then the entire sub-tree does not need to be explored further, and we can “prune” that branch.
It systematically searches for a solution and significantly reduces the search space through pruning. It is used to solve problems like the Traveling Salesman Problem (TSP).
(d) Using the Master Theorem
The Master Theorem is used to provide asymptotic bounds for recurrence relations of the form T(n) = aT(n/b) + f(n). We compare n^(log_b a) with f(n).
(i) T(n) = 4T(n/2) + n²
- Here, a = 4, b = 2, and f(n) = n².
- Calculate n^(log_b a) = n^(log₂ 4) = n².
- Since f(n) = n² = Θ(n^(log_b a)), we apply Case 2 of the Master Theorem.
- Therefore, T(n) = Θ(n^(log_b a) log n) = Θ(n² log n).
(ii) T(n) = 2T(n/3) + n√n
- Here, a = 2, b = 3, and f(n) = n√n = n^1.5.
- Calculate n^(log_b a) = n^(log₃ 2) ≈ n^0.63.
- Since f(n) = n^1.5, f(n) = Ω(n^(log₃ 2 + ε)) for some ε > 0 (e.g., ε = 0.87). This points to Case 3.
- We must check the regularity condition: a·f(n/b) ≤ c·f(n) for some c < 1. Here 2·f(n/3) = 2(n/3)^1.5 = 2n^1.5/3^1.5 = (2/(3√3))·n^1.5. Since c = 2/(3√3) ≈ 0.385, which is less than 1, the condition holds.
- Therefore, T(n) = Θ(f(n)) = Θ(n√n).
Q2. (a) State Euclid’s algorithm and find GCD(595, 252) by this algorithm. (5) (b) Write the iterations for sorting the following list of numbers using bubble sort and selection sort: (5) 45, 67, 3, 90, 2, 35, 28, 0
Ans.
(a) Euclid’s Algorithm and GCD(595, 252)
Euclid’s Algorithm is an efficient method for computing the greatest common divisor (GCD) of two integers, the largest number that divides them both without leaving a remainder.
The Algorithm:
Given two non-negative integers `a` and `b`, where `a >= b`:
- If `b` is 0, then the GCD is `a`.
- Otherwise, the GCD of `a` and `b` is the same as the GCD of `b` and `a mod b` (the remainder when `a` is divided by `b`).
The recursive formula is: GCD(a, b) = GCD(b, a % b) .
Calculation of GCD(595, 252):
We apply the algorithm step-by-step:
- Step 1: GCD(595, 252) 595 = 2 * 252 + 91. The remainder is 91. So, we find GCD(252, 91).
- Step 2: GCD(252, 91) 252 = 2 * 91 + 70. The remainder is 70. So, we find GCD(91, 70).
- Step 3: GCD(91, 70) 91 = 1 * 70 + 21. The remainder is 21. So, we find GCD(70, 21).
- Step 4: GCD(70, 21) 70 = 3 * 21 + 7. The remainder is 7. So, we find GCD(21, 7).
- Step 5: GCD(21, 7) 21 = 3 * 7 + 0. The remainder is 0. So, we find GCD(7, 0).
- Step 6: According to the algorithm, when the second number is 0, the GCD is the first number.
Therefore, the GCD(595, 252) is 7 .
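The steps above translate directly into code; a minimal iterative sketch (the `gcd` name is ours):

```javascript
// Euclid's algorithm: repeatedly replace (a, b) with (b, a mod b).
function gcd(a, b) {
  while (b !== 0) {
    [a, b] = [b, a % b]; // GCD(a, b) = GCD(b, a mod b)
  }
  return a;              // when b is 0, a is the GCD
}

console.log(gcd(595, 252)); // 7
```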
(b) Sorting Iterations
Original List: [45, 67, 3, 90, 2, 35, 28, 0]
Bubble Sort Iterations:
In each pass, the largest unsorted element “bubbles” to its correct position at the end of the unsorted part.
- Pass 1: Compare adjacent pairs and swap where needed: 45 and 67 (no swap), 67 and 3 (swap), 67 and 90 (no swap), then 90 swaps past 2, 35, 28, and 0. List after pass 1: [45, 3, 67, 2, 35, 28, 0, 90] (90 is in place)
- Pass 2: [3, 45, 2, 35, 28, 0, 67, 90] (67 is in place)
- Pass 3: [3, 2, 35, 28, 0, 45, 67, 90]
- Pass 4: [2, 3, 28, 0, 35, 45, 67, 90]
- Pass 5: [2, 3, 0, 28, 35, 45, 67, 90]
- Pass 6: [2, 0, 3, 28, 35, 45, 67, 90]
- Pass 7: [0, 2, 3, 28, 35, 45, 67, 90] (sorted)
Final Sorted List: [0, 2, 3, 28, 35, 45, 67, 90]
Selection Sort Iterations:
In each pass, find the minimum element in the unsorted part and swap it with the first element of the unsorted part.
- Pass 1: Find min (0). Swap with 45. → [0, 67, 3, 90, 2, 35, 28, 45]
- Pass 2: Unsorted: [67, 3, 90, 2, 35, 28, 45]. Find min (2). Swap with 67. → [0, 2, 3, 90, 67, 35, 28, 45]
- Pass 3: Unsorted: [3, 90, 67, 35, 28, 45]. Min is 3 (already in place). No swap. → [0, 2, 3, 90, 67, 35, 28, 45]
- Pass 4: Unsorted: [90, 67, 35, 28, 45]. Find min (28). Swap with 90. → [0, 2, 3, 28, 67, 35, 90, 45]
- Pass 5: Unsorted: [67, 35, 90, 45]. Find min (35). Swap with 67. → [0, 2, 3, 28, 35, 67, 90, 45]
- Pass 6: Unsorted: [67, 90, 45]. Find min (45). Swap with 67. → [0, 2, 3, 28, 35, 45, 90, 67]
- Pass 7: Unsorted: [90, 67]. Find min (67). Swap with 90. → [0, 2, 3, 28, 35, 45, 67, 90]
Final Sorted List: [0, 2, 3, 28, 35, 45, 67, 90]
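For reference, minimal sketches of both sorts (function names are illustrative); each returns a sorted copy rather than mutating its input:

```javascript
// Bubble sort: after each pass the largest unsorted element is in place at the end.
function bubbleSort(a) {
  const arr = [...a];
  for (let pass = 0; pass < arr.length - 1; pass++) {
    for (let i = 0; i < arr.length - 1 - pass; i++) {
      if (arr[i] > arr[i + 1]) [arr[i], arr[i + 1]] = [arr[i + 1], arr[i]];
    }
  }
  return arr;
}

// Selection sort: swap the minimum of the unsorted part to its front.
function selectionSort(a) {
  const arr = [...a];
  for (let i = 0; i < arr.length - 1; i++) {
    let min = i;
    for (let j = i + 1; j < arr.length; j++) {
      if (arr[j] < arr[min]) min = j; // find the minimum of the unsorted part
    }
    if (min !== i) [arr[i], arr[min]] = [arr[min], arr[i]];
  }
  return arr;
}

const list = [45, 67, 3, 90, 2, 35, 28, 0];
console.log(bubbleSort(list));    // [0, 2, 3, 28, 35, 45, 67, 90]
console.log(selectionSort(list)); // [0, 2, 3, 28, 35, 45, 67, 90]
```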
Q3. (a) Differentiate between Kruskal’s algorithm and Prim’s algorithm. (4) (b) Apply Dijkstra’s algorithm to find shortest path from source vertex A to each of the other vertices of the following directed graph : (6)
Ans.
(a) Difference between Kruskal’s and Prim’s Algorithm
Both Kruskal’s and Prim’s algorithms are greedy algorithms used to find the Minimum Spanning Tree (MST) of a connected, undirected, and weighted graph. However, they approach the problem differently.
| Feature | Kruskal’s Algorithm | Prim’s Algorithm |
|---|---|---|
| Core Idea | Edge-based: selects the minimum-weight edge from the entire graph, as long as it does not form a cycle. | Vertex-based: starts from an arbitrary vertex and grows the MST by adding the cheapest edge connecting a vertex in the MST to a vertex outside it. |
| Intermediate State | A forest (a collection of disjoint trees); at the end, the forest becomes a single tree (the MST). | Always a single connected tree. |
| Cycle Detection | Requires a mechanism (like the Union-Find data structure) to check whether adding an edge will form a cycle. | Needs no explicit cycle detection: it only adds edges connecting a visited vertex to an unvisited one, which can never form a cycle. |
| Data Structure | Typically a Disjoint Set Union (DSU) data structure for efficient cycle detection. | Typically a priority queue (min-heap) to efficiently select the minimum-weight edge to an unvisited vertex. |
| Complexity | O(E log E) or O(E log V), dominated by sorting the edges plus DSU operations. Works well on sparse graphs (E close to V). | O(E log V) with a binary heap, or O(E + V log V) with a Fibonacci heap. Works well on dense graphs (E close to V²). |
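To make the cycle-detection and data-structure rows concrete, here is a compact Kruskal sketch using Union-Find; the example graph is ours, purely illustrative:

```javascript
// Kruskal's algorithm: sort edges by weight, add an edge only when its
// endpoints lie in different Union-Find components (i.e. it forms no cycle).
function kruskal(numVertices, edges) {
  const parent = Array.from({ length: numVertices }, (_, i) => i);
  const find = (x) => (parent[x] === x ? x : (parent[x] = find(parent[x])));

  const mst = [];
  for (const [u, v, w] of [...edges].sort((a, b) => a[2] - b[2])) {
    const ru = find(u), rv = find(v);
    if (ru !== rv) {   // different components: adding (u, v) forms no cycle
      parent[ru] = rv; // union the two components
      mst.push([u, v, w]);
    }
  }
  return mst;
}

// 4 vertices, weighted undirected edges given as [u, v, weight]
const sampleEdges = [[0, 1, 4], [0, 2, 1], [1, 2, 2], [1, 3, 5], [2, 3, 8]];
const mst = kruskal(4, sampleEdges);
const total = mst.reduce((s, [, , w]) => s + w, 0);
console.log(mst, total); // MST weight = 1 + 2 + 5 = 8
```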
(b) Dijkstra’s Algorithm Application
The goal is to find the shortest path from the source vertex A to all other vertices. We will maintain a set of visited vertices and a distance array `dist` which stores the shortest distance found so far from A.
The graph has vertices {A, B, C, D, E} and the following edges:
- A → B (3)
- A → D (5)
- B → C (10)
- B → D (2)
- C → E (4)
- D → C (1)
- D → E (7)
Let `dist` be an array for distances and `prev` for path reconstruction. Initialize: `dist = {A:0, B:∞, C:∞, D:∞, E:∞}`, `visited = {}`
Step 1: Visit A (Current Vertex)
- `visited = {A}`
- Update neighbors of A:
- dist(B) = min(∞, dist(A) + 3) = 3. prev(B) = A.
- dist(D) = min(∞, dist(A) + 5) = 5. prev(D) = A.
- `dist` becomes: {A:0, B:3, C:∞, D:5, E:∞}
Step 2: Visit B (Next smallest distance is 3)
- `visited = {A, B}`
- Update neighbors of B:
- dist(C) = min(∞, dist(B) + 10) = 13. prev(C) = B.
- dist(D) = min(5, dist(B) + 2) = min(5, 5) = 5. No change.
- `dist` becomes: {A:0, B:3, C:13, D:5, E:∞}
Step 3: Visit D (Next smallest distance is 5)
- `visited = {A, B, D}`
- Update neighbors of D:
- dist(C) = min(13, dist(D) + 1) = min(13, 6) = 6. prev(C) = D.
- dist(E) = min(∞, dist(D) + 7) = min(∞, 12) = 12. prev(E) = D.
- `dist` becomes: {A:0, B:3, C:6, D:5, E:12}
Step 4: Visit C (Next smallest distance is 6)
- `visited = {A, B, D, C}`
- Update neighbors of C:
- dist(E) = min(12, dist(C) + 4) = min(12, 10) = 10. prev(E) = C.
- `dist` becomes: {A:0, B:3, C:6, D:5, E:10}
Step 5: Visit E (Last remaining vertex, distance 10)
- `visited = {A, B, D, C, E}`
- E has no outgoing edges. No updates.
Final Shortest Paths from A:
- To B: Path A → B. Cost = 3
- To C: Path A → D → C. Cost = 5 + 1 = 6
- To D: Path A → D. Cost = 5
- To E: Path A → D → C → E. Cost = 5 + 1 + 4 = 10
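The run above can be checked with a small sketch on the same directed graph (a linear scan replaces the usual priority queue for brevity; names are illustrative):

```javascript
// Adjacency-list form of the graph from the question.
const graph = {
  A: { B: 3, D: 5 },
  B: { C: 10, D: 2 },
  C: { E: 4 },
  D: { C: 1, E: 7 },
  E: {},
};

function dijkstra(graph, source) {
  const dist = {}, visited = new Set();
  for (const v of Object.keys(graph)) dist[v] = Infinity;
  dist[source] = 0;

  while (visited.size < Object.keys(graph).length) {
    // pick the unvisited vertex with the smallest tentative distance
    let u = null;
    for (const v of Object.keys(graph)) {
      if (!visited.has(v) && (u === null || dist[v] < dist[u])) u = v;
    }
    visited.add(u);
    // relax every outgoing edge of u
    for (const [v, w] of Object.entries(graph[u])) {
      if (dist[u] + w < dist[v]) dist[v] = dist[u] + w;
    }
  }
  return dist;
}

console.log(dijkstra(graph, "A")); // { A: 0, B: 3, C: 6, D: 5, E: 10 }
```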
Q4. (a) Consider the following sorted array A with 8 elements : 7, 5, 9, 27, 30, 48, 56, 59, 75, 87, 94 20, 7 Illustrate the working of Binary search algorithm while searching for ITEM: (5) (i) 9 (ii) 94 (b) Show that the runtime of QUICK SORT algorithm in the best case is O(n log n). (5)
Ans.
(a) Binary Search Illustration
Note: The question provides a garbled, unsorted list of numbers but states it is a sorted array. For binary search to work, the array must be sorted. We will assume the intended array is a sorted collection of these numbers:
Let the sorted array A be: [5, 7, 9, 20, 27, 30, 48, 56, 59, 75, 87, 94] . The indices range from 0 to 11 (n=12).
The binary search algorithm works by repeatedly dividing the search interval in half. We use three pointers: `low`, `high`, and `mid`.
(i) Searching for ITEM = 9
- Iteration 1:
- `low = 0`, `high = 11`
- `mid = floor((0 + 11) / 2) = 5`
- Compare ITEM with `A[mid]`: `A[5]` is 30.
- Since `9 < 30`, the item must be in the left half. Update `high`.
- `high = mid - 1 = 4`
- Iteration 2:
- `low = 0`, `high = 4`
- `mid = floor((0 + 4) / 2) = 2`
- Compare ITEM with `A[mid]`: `A[2]` is 9.
- Since `9 == 9`, the item is found at index 2.
The element 9 is found at index 2 .
(ii) Searching for ITEM = 94
- Iteration 1:
- `low = 0`, `high = 11`
- `mid = floor((0 + 11) / 2) = 5`
- Compare ITEM with `A[mid]`: `A[5]` is 30.
- Since `94 > 30`, the item must be in the right half. Update `low`.
- `low = mid + 1 = 6`
- Iteration 2:
- `low = 6`, `high = 11`
- `mid = floor((6 + 11) / 2) = 8`
- Compare ITEM with `A[mid]`: `A[8]` is 59.
- Since `94 > 59`, the item must be in the right half. Update `low`.
- `low = mid + 1 = 9`
- Iteration 3:
- `low = 9`, `high = 11`
- `mid = floor((9 + 11) / 2) = 10`
- Compare ITEM with `A[mid]`: `A[10]` is 87.
- Since `94 > 87`, the item must be in the right half. Update `low`.
- `low = mid + 1 = 11`
- Iteration 4:
- `low = 11`, `high = 11`
- `mid = floor((11 + 11) / 2) = 11`
- Compare ITEM with `A[mid]`: `A[11]` is 94.
- Since `94 == 94`, the item is found at index 11.
The element 94 is found at index 11 .
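A minimal sketch of the iterative binary search traced above (names are illustrative):

```javascript
// Binary search: repeatedly halve the interval [low, high].
function binarySearch(A, item) {
  let low = 0, high = A.length - 1;
  while (low <= high) {
    const mid = Math.floor((low + high) / 2);
    if (A[mid] === item) return mid;   // found
    if (item < A[mid]) high = mid - 1; // search the left half
    else low = mid + 1;                // search the right half
  }
  return -1;                           // not present
}

const A = [5, 7, 9, 20, 27, 30, 48, 56, 59, 75, 87, 94];
console.log(binarySearch(A, 9));  // 2
console.log(binarySearch(A, 94)); // 11
```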
(b) Quick Sort Best Case Runtime Analysis
The runtime of Quick Sort is highly dependent on the choice of the pivot element and the resulting partitions.
The Best Case Scenario:
The best case for Quick Sort occurs when the `partition` process always splits the array into two sub-arrays of almost equal size. This happens when the pivot element chosen is the median of the array. In this ideal scenario, an array of size `n` is divided into two sub-arrays of size approximately `n/2`.
Recurrence Relation:
Let T(n) be the time taken to sort an array of size `n`.
- Divide: The `partition` step takes linear time to rearrange the elements around the pivot. This is O(n) .
- Conquer: The algorithm recursively sorts the two sub-arrays. In the best case, the sizes are `n/2` and `n/2`. The time taken is 2T(n/2) .
- Combine: No work is needed to combine the sub-arrays, as the sorting is done in-place. This is O(1) .
This leads to the recurrence relation for the best case: T(n) = 2T(n/2) + O(n)
Solving the Recurrence:
We can solve this recurrence using the Master Theorem or by drawing a recurrence tree.
Using the Master Theorem (T(n) = aT(n/b) + f(n)):
- a = 2, b = 2, f(n) = n
- We compare f(n) with n^(log_b a) = n^(log₂ 2) = n¹ = n.
- Since f(n) = n = Θ(n^(log_b a)), we are in Case 2 of the Master Theorem.
- Therefore, T(n) = Θ(n^(log_b a) log n) = Θ(n log n).
Using the Recurrence Tree method:
- The root of the tree has a cost of `cn`.
- The next level has 2 subproblems of size `n/2`, with a total cost of 2 * c(n/2) = `cn`.
- The next level has 4 subproblems of size `n/4`, with a total cost of 4 * c(n/4) = `cn`.
- This continues until the subproblem size is 1. The height of the tree is log₂ n.
- At each of the `log n` levels, the total work done is `cn`.
- Total work = (work per level) × (number of levels) = `cn log n`.
Thus, the best-case time complexity of Quick Sort is O(n log n) .
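For concreteness, a minimal Quick Sort sketch using the Lomuto partition scheme (one of several common schemes; the question does not fix one). In the best case each partition splits the array roughly in half, giving the recurrence analyzed above:

```javascript
function quickSort(arr, low = 0, high = arr.length - 1) {
  if (low < high) {
    const p = partition(arr, low, high); // O(n) divide step
    quickSort(arr, low, p - 1);          // conquer the left sub-array
    quickSort(arr, p + 1, high);         // conquer the right sub-array
  }
  return arr;
}

// Lomuto partition: place elements <= pivot (the last element) on the left.
function partition(arr, low, high) {
  const pivot = arr[high];
  let i = low - 1;
  for (let j = low; j < high; j++) {
    if (arr[j] <= pivot) {
      i++;
      [arr[i], arr[j]] = [arr[j], arr[i]];
    }
  }
  [arr[i + 1], arr[high]] = [arr[high], arr[i + 1]];
  return i + 1; // final position of the pivot
}

console.log(quickSort([45, 67, 3, 90, 2, 35, 28, 0]));
// [0, 2, 3, 28, 35, 45, 67, 90]
```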
Q5. (a) Multiply 026782 x 0732920 using divide and conquer technique using Karatsuba method. (5) (b) Write an algorithm for Depth First Search (DFS) and traverse the following graph using DFS. Starting vertex is A: (5)
Ans.
(a) Karatsuba Multiplication
Note: The numbers in the question, `026782` and `0732920`, have subscripts `2` which are likely typos, and the second number contains a `9` which makes it invalid in base 8 (octal). We will assume the question intended to multiply two decimal integers of equal length. Let’s use `2678` and `7329` as a 4-digit example, which is standard for demonstrating this method.
We want to compute 2678 × 7329 .
Let X = 2678 and Y = 7329. Here n = 4.
1. Split the numbers into two halves (n/2 = 2):
- X = 26 × 10² + 78. So, a = 26, b = 78.
- Y = 73 × 10² + 29. So, c = 73, d = 29.
2. Perform three recursive multiplications:
- p1 = a × c = 26 × 73 = 1898
- p2 = b × d = 78 × 29 = 2262
- p3 = (a + b) × (c + d) = (26 + 78) × (73 + 29) = 104 × 102 = 10608
3. Calculate the middle term (ad + bc):
- ad + bc = p3 - p1 - p2
- ad + bc = 10608 - 1898 - 2262 = 6448
4. Combine the results. The final product is given by the formula p1 × 10^n + (ad + bc) × 10^(n/2) + p2:
- = 1898 × 10⁴ + 6448 × 10² + 2262
- = 18980000 + 644800 + 2262
- = 19624800 + 2262
- = 19627062
Checking with direct multiplication: 2678 × 7329 = 19627062. The result is correct.
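The single Karatsuba step worked above can be sketched in code (hard-coded to 4-digit operands for clarity; a full implementation would recurse on each of the three products):

```javascript
// One Karatsuba step: three multiplications (p1, p2, p3) instead of four.
function karatsubaStep(x, y) {
  const half = 100;                             // split 4-digit numbers at 10^2
  const a = Math.floor(x / half), b = x % half; // x = a*10^2 + b
  const c = Math.floor(y / half), d = y % half; // y = c*10^2 + d

  const p1 = a * c;                             // 26 * 73 = 1898
  const p2 = b * d;                             // 78 * 29 = 2262
  const p3 = (a + b) * (c + d);                 // 104 * 102 = 10608
  const mid = p3 - p1 - p2;                     // ad + bc = 6448

  return p1 * 10000 + mid * 100 + p2;           // recombine
}

console.log(karatsubaStep(2678, 7329)); // 19627062
```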
(b) Depth First Search (DFS) Algorithm and Traversal
DFS Algorithm (Recursive):
We need a `visited` array or set to keep track of visited vertices to avoid cycles and redundant processing.
DFS(graph, vertex u, visited):
1. Mark u as visited.
2. Print or process vertex u.
3. For each neighbor v of u in graph:
4. If v has not been visited:
5. DFS(graph, v, visited)
Initial Call: To start the traversal from a source vertex `s`:
1. Create a `visited` array/set, initialized to false/empty for all vertices.
2. Call `DFS(graph, s, visited)`.
Traversal of the Given Graph:
The graph has vertices {A, B, C, D} and edges connecting them. Assuming the graph is undirected based on the drawing:
- A is connected to B, C
- B is connected to A, D
- C is connected to A, D
- D is connected to B, C
We will traverse starting from vertex A . We will assume alphabetical order when choosing the next neighbor to visit.
Steps:
- Start at A. Call `DFS(A)`.
- Mark A as visited. Print A.
- Visited: {A}. Traversal Order: A
- Explore neighbors of A. Let’s pick B first.
- Call `DFS(B)`. Mark B as visited. Print B.
- Visited: {A, B}. Traversal Order: A, B
- Explore neighbors of B. A is already visited. Pick D .
- Call `DFS(D)`. Mark D as visited. Print D.
- Visited: {A, B, D}. Traversal Order: A, B, D
- Explore neighbors of D. B is visited. Pick C .
- Call `DFS(C)`. Mark C as visited. Print C.
- Visited: {A, B, D, C}. Traversal Order: A, B, D, C
- Explore neighbors of C. Both A and D are visited. `DFS(C)` returns.
- Back to `DFS(D)`. All neighbors of D (B, C) have been visited. `DFS(D)` returns.
- Back to `DFS(B)`. All neighbors of B (A, D) have been visited. `DFS(B)` returns.
- Back to `DFS(A)`. Next neighbor of A is C , but C is already visited.
- All neighbors of A have been processed. `DFS(A)` finishes.
The Depth First Search traversal starting from vertex A is: A → B → D → C .
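A recursive sketch of the traversal above, with the same assumed adjacency lists and alphabetical neighbor order:

```javascript
// Undirected graph from the answer, as assumed adjacency lists.
const graph = {
  A: ["B", "C"],
  B: ["A", "D"],
  C: ["A", "D"],
  D: ["B", "C"],
};

function dfs(graph, u, visited = new Set(), order = []) {
  visited.add(u); // mark u as visited
  order.push(u);  // process vertex u
  for (const v of graph[u]) {
    if (!visited.has(v)) dfs(graph, v, visited, order); // recurse on unvisited neighbors
  }
  return order;
}

console.log(dfs(graph, "A").join(" → ")); // A → B → D → C
```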
Download the IGNOU BCS-042 previous-year question paper PDFs to strengthen your preparation. The solved papers, available in both Hindi and English, help you understand the exam pattern and score better.
Thanks!