Interpolation Search in C

Interpolation search is an improved variant of binary search. Instead of always probing the middle element, it estimates the likely position of the target from the range of values in the sorted array, which makes it more efficient than binary search for uniformly distributed data.
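
For example, with the sorted array {10, 20, 30, 40, 50, 60, 70, 80, 90, 100} used in the program below and a target of 70, the first probe is estimated at pos = low + ((target - arr[low]) * (high - low)) / (arr[high] - arr[low]) = 0 + ((70 - 10) * (9 - 0)) / (100 - 10) = 6, which is exactly the index where 70 is stored, so the search finishes after a single probe.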

Program Structure


#include <stdio.h>

// Function to perform interpolation search
int interpolationSearch(int arr[], int size, int target) {
    int low = 0, high = size - 1;

    while (low <= high && target >= arr[low] && target <= arr[high]) {
        // Guard against division by zero when all remaining values are equal
        if (arr[high] == arr[low]) {
            return (arr[low] == target) ? low : -1;
        }

        // Estimate the position of the target value
        int pos = low + ((target - arr[low]) * (high - low)) / (arr[high] - arr[low]);

        // Check if the target value is at the estimated position
        if (arr[pos] == target) {
            return pos;  // Target found
        }

        // If target is greater, ignore left half
        if (arr[pos] < target) {
            low = pos + 1;
        } else {  // If target is smaller, ignore right half
            high = pos - 1;
        }
    }
    return -1;  // Target not found
}

// Main function
int main() {
    int arr[] = {10, 20, 30, 40, 50, 60, 70, 80, 90, 100};
    int size = sizeof(arr) / sizeof(arr[0]);
    int target = 70;

    int result = interpolationSearch(arr, size, target);
    if (result != -1) {
        printf("Element found at index: %d\n", result);
    } else {
        printf("Element not found.\n");
    }

    return 0;
}
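
When compiled with a standard C compiler and run, the program prints Element found at index: 6, since the target value 70 is stored at index 6 of the sample array.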

Documentation

Function: interpolationSearch

Parameters:

  • int arr[]: The sorted array of integers in which to search.
  • int size: The number of elements in the array.
  • int target: The value to search for.

Returns: The index of the target if found, otherwise -1.

Main Function

In the main function, a sample sorted array is defined. The interpolationSearch function is called with this array and a target value. The result is printed to the console.

Complexity Analysis

The average case time complexity of interpolation search is O(log log n) for uniformly distributed data. However, the worst-case time complexity can degrade to O(n) for non-uniformly distributed data.
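
To see how the worst case arises, consider heavily skewed data. The sketch below is purely illustrative (the probe-counting helper interpolationSearchProbes is not part of the program above): it reuses the same probing logic but also counts how many positions are inspected. With one huge outlier at the end of the array, every estimate collapses toward the low end, and the search degenerates into a linear scan.

#include <stdio.h>

// Illustrative sketch: same probing logic as interpolationSearch above,
// but it also counts how many array positions are inspected.
int interpolationSearchProbes(int arr[], int size, int target, int *probes) {
    int low = 0, high = size - 1;
    *probes = 0;

    while (low <= high && target >= arr[low] && target <= arr[high]) {
        (*probes)++;

        // Same guard as above: all remaining values are equal
        if (arr[high] == arr[low]) {
            return (arr[low] == target) ? low : -1;
        }

        int pos = low + ((target - arr[low]) * (high - low)) / (arr[high] - arr[low]);

        if (arr[pos] == target) {
            return pos;
        }
        if (arr[pos] < target) {
            low = pos + 1;
        } else {
            high = pos - 1;
        }
    }
    return -1;
}

int main() {
    // One huge outlier drags every probe estimate toward index 0
    int skewed[] = {1, 2, 3, 4, 5, 6, 7, 8, 9, 1000000};
    int probes;

    int index = interpolationSearchProbes(skewed, 10, 9, &probes);
    // Prints: Found 9 at index 8 after 9 probes (one probe per element, i.e. O(n) behavior)
    printf("Found 9 at index %d after %d probes\n", index, probes);
    return 0;
}

On the uniformly spaced array from the earlier program, the same helper would find the target 70 after a single probe.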

 

By Aditya Bhuyan

