Murat Umutlu

Algorithms

An algorithm is a set of instructions that a computer can follow to solve a particular problem or accomplish a specific task. These instructions are typically written in a programming language and can be thought of as a step-by-step procedure for performing a computation.

Sorting Algorithms

Sorting algorithms are an important concept in computer science and programming. They provide a way to rearrange a collection of data elements into a specific order. Sorting algorithms are used in many different applications, such as searching, data compression, and data analysis. There are many different types of sorting algorithms, each with its own strengths and weaknesses, and each designed to solve specific problems.
The choice of sorting algorithm depends on the specific problem that needs to be solved, as well as the constraints and requirements of the system.
There are many different types of sorting algorithms, but here are some of the most commonly used ones:

Bubble Sort

Bubble sort is a simple sorting algorithm that repeatedly steps through the list, compares adjacent elements and swaps them if they are in the wrong order. The pass through the list is repeated until the list is sorted.
function bubbleSort(arr) {
  const len = arr.length;
  for (let i = 0; i < len - 1; i++) {
    for (let j = 0; j < len - i - 1; j++) {
      if (arr[j] > arr[j + 1]) {
        [arr[j], arr[j + 1]] = [arr[j + 1], arr[j]];
      }
    }
  }

  return arr;
}

// example usage
const arr = [64, 34, 25, 12, 22, 11, 90];
console.log(bubbleSort(arr)); // [11, 12, 22, 25, 34, 64, 90]
In this implementation, we first define a bubbleSort function that takes an array as its parameter. We then use two nested loops to iterate through the array and compare adjacent elements. If two adjacent elements are in the wrong order, we swap them using destructuring assignment. We repeat this process until the array is sorted and return the sorted array.
We then provide an example usage of the bubbleSort function, where we create an unsorted array of numbers and call the bubbleSort function to sort the array. Finally, we log the sorted array to the console.

Selection Sort

Selection sort is an in-place comparison sorting algorithm that sorts an array by repeatedly finding the minimum element in the unsorted part and moving it to the beginning of that part.
function selectionSort(arr) {
  for (let i = 0; i < arr.length - 1; i++) {
    let minIndex = i;
    for (let j = i + 1; j < arr.length; j++) {
      if (arr[j] < arr[minIndex]) {
        minIndex = j;
      }
    }
    if (minIndex !== i) {
      let temp = arr[i];
      arr[i] = arr[minIndex];
      arr[minIndex] = temp;
    }
  }
  return arr;
}

const arr = [64, 25, 12, 22, 11];
console.log(selectionSort(arr)); // Output: [11, 12, 22, 25, 64]
In this example, we start with an unsorted array arr of integers. The selectionSort function uses two nested loops to iterate over the array and find the minimum element from the unsorted part.
The outer loop iterates through the array from the first element to the second-to-last. The inner loop iterates through the unsorted part of the array, from the element after the current position to the end.
The inner loop compares each element to the current minimum element (arr[minIndex]) and updates the minimum index (minIndex) if it finds a smaller element.
Once the inner loop is complete, the function checks if the minimum index has changed. If it has, it swaps the minimum element with the first element of the unsorted part of the array.
The function repeats this process until the entire array is sorted, then returns the sorted array.

Insertion Sort

Insertion sort is another simple sorting algorithm that builds the final sorted array one item at a time. It is much less efficient on large lists than more advanced algorithms such as quicksort, heapsort, or merge sort.
function insertionSort(arr) {
  for (let i = 1; i < arr.length; i++) {
    let current = arr[i];
    let j = i - 1;
    while (j >= 0 && arr[j] > current) {
      arr[j + 1] = arr[j];
      j--;
    }
    arr[j + 1] = current;
  }
  return arr;
}

let unsortedArray = [7, 2, 4, 1, 5, 3];
let sortedArray = insertionSort(unsortedArray);
console.log(sortedArray); // [1, 2, 3, 4, 5, 7]
In this implementation, we start with the second element of the array and compare it to the elements before it to find its correct position. We do this by creating a variable called current to store the value of the current element, and a variable called j to keep track of the index of the element before current.
We then loop through the elements before current (starting at j = i - 1) and shift each element one position to the right until we find an element that is less than or equal to current, at which point we insert current at the next index.
We continue this process until we have gone through all the elements in the array, resulting in a sorted array.

Merge Sort

Merge sort is a divide-and-conquer algorithm that works by dividing the input array into smaller arrays, sorting these smaller arrays, and then merging them back into a single, sorted array.
function mergeSort(array) {
  if (array.length <= 1) {
    return array;
  }

  const midIndex = Math.floor(array.length / 2);
  const leftArray = array.slice(0, midIndex);
  const rightArray = array.slice(midIndex);

  const sortedLeftArray = mergeSort(leftArray);
  const sortedRightArray = mergeSort(rightArray);

  return merge(sortedLeftArray, sortedRightArray);
}

function merge(leftArray, rightArray) {
  const mergedArray = [];

  let leftIndex = 0;
  let rightIndex = 0;

  while (leftIndex < leftArray.length && rightIndex < rightArray.length) {
    if (leftArray[leftIndex] < rightArray[rightIndex]) {
      mergedArray.push(leftArray[leftIndex]);
      leftIndex++;
    } else {
      mergedArray.push(rightArray[rightIndex]);
      rightIndex++;
    }
  }

  return mergedArray.concat(leftArray.slice(leftIndex)).concat(rightArray.slice(rightIndex));
}
This implementation uses recursion to divide the input array into smaller arrays, sorts them recursively, and then merges them back together using a merge function. The merge function iteratively compares the elements of the two input arrays and pushes the smaller element to the output array, until one of the input arrays has been completely processed. Then, it appends the remaining elements of the other input array to the output array.
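For reference, here is a small usage example of the mergeSort function above (the input array is our own illustration):

// Example usage
const unsorted = [38, 27, 43, 3, 9, 82, 10];
console.log(mergeSort(unsorted)); // [3, 9, 10, 27, 38, 43, 82]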

Quick Sort

Quick sort is another divide-and-conquer algorithm that works by selecting a 'pivot' element from the array and partitioning the other elements into two sub-arrays, according to whether they are less than or greater than the pivot.
function quickSort(arr) {
  if (arr.length <= 1) {
    return arr;
  }

  const pivot = arr[0];
  const left = [];
  const right = [];

  for (let i = 1; i < arr.length; i++) {
    if (arr[i] < pivot) {
      left.push(arr[i]);
    } else {
      right.push(arr[i]);
    }
  }

  return [...quickSort(left), pivot, ...quickSort(right)];
}

const arr = [5, 3, 8, 4, 2, 1, 9, 6, 7];
const sortedArr = quickSort(arr);
console.log(sortedArr); // Output: [1, 2, 3, 4, 5, 6, 7, 8, 9]
In this implementation, we first check if the array has only one element or less, in which case it is already sorted and we can return it. Otherwise, we choose the first element of the array as the pivot and create two empty arrays, left and right, to hold the elements that are smaller than and greater than the pivot, respectively.
We then iterate through the rest of the elements in the array, comparing them to the pivot and adding them to the appropriate sub-array. Finally, we recursively call quickSort on the left and right sub-arrays, and concatenate them along with the pivot to create the final sorted array.
Note that this implementation is not very efficient for very large arrays, as it has a worst-case time complexity of O(n^2) if the pivot is consistently chosen poorly. However, it has an average case time complexity of O(n log n), which is quite good. There are also ways to optimize the choice of pivot to improve performance.

Searching Algorithms

Searching algorithms are used to find specific items in a collection of data. They are an essential part of computer science and programming, and are used in many different applications, such as searching a database, finding a specific item in a list, or searching through a text file.
There are many different types of searching algorithms, each with its own strengths and weaknesses, and each designed to solve specific problems. Some algorithms are faster than others, some are more efficient in certain situations, and some are more complex than others.
In general, a searching algorithm is a way of finding a specific item in a collection of data.

Linear Search

Linear search, also known as sequential search, is a simple searching algorithm that checks every element in a list until it finds the item it is looking for.
function linearSearch(arr, target) {
  for (let i = 0; i < arr.length; i++) {
    if (arr[i] === target) {
      return i; // Return the index of the target element
    }
  }
  return -1; // Target element not found in array
}

// Example usage
const array = [5, 2, 4, 6, 1, 3];
const target = 4;
console.log(linearSearch(array, target)); // Output: 2
In this example, the linearSearch function takes an array and a target value as arguments, and returns the index of the target element if it is found in the array. If the target element is not found, the function returns -1. The function uses a for loop to iterate through each element of the array, and uses an if statement to compare each element with the target value. If a match is found, the index of the element is returned.

Binary Search

Binary search is a faster searching algorithm that only works on sorted lists. It works by repeatedly dividing the list in half until it finds the item it is looking for.
function binarySearch(arr, target) {
  let left = 0;
  let right = arr.length - 1;

  while (left <= right) {
    const mid = Math.floor((left + right) / 2);

    if (arr[mid] === target) {
      return mid;
    } else if (arr[mid] < target) {
      left = mid + 1;
    } else {
      right = mid - 1;
    }
  }

  return -1; // target not found
}
In this implementation, the binarySearch function takes an array and a target value as parameters and returns the index of the target value in the array if it exists, or -1 if it does not. The function initializes the left and right endpoints of the search interval to the beginning and end of the array, respectively.
It then enters a loop that continues as long as the search interval is not empty. In each iteration, the function computes the midpoint of the search interval and compares the value at that index to the target value. If the value at the midpoint is equal to the target value, the function returns the index of the midpoint. Otherwise, if the value at the midpoint is less than the target value, the function updates the left endpoint of the search interval to be one index greater than the midpoint; conversely, if the value at the midpoint is greater than the target value, the function updates the right endpoint to be one index less than the midpoint.
The loop continues until either the target value is found or the search interval is empty, at which point the function returns -1 to indicate that the target value was not found in the array.
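A brief usage example (the sorted array and targets are our own illustration):

// The input array must already be sorted
const sortedValues = [1, 3, 5, 7, 9, 11];
console.log(binarySearch(sortedValues, 7)); // Output: 3
console.log(binarySearch(sortedValues, 4)); // Output: -1 (not found)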

Interpolation Search

Interpolation search is another faster searching algorithm that works on sorted lists. It works by using interpolation to guess where the item might be, and then checks that position to see if the item is there.
function interpolationSearch(array, target) {
  let low = 0;
  let high = array.length - 1;

  while (low <= high && target >= array[low] && target <= array[high]) {
    let position = Math.floor(
      low + ((target - array[low]) * (high - low)) / (array[high] - array[low])
    );

    if (array[position] === target) {
      return position;
    }

    if (array[position] < target) {
      low = position + 1;
    } else {
      high = position - 1;
    }
  }

  return -1;
}
In this implementation, the array parameter is the sorted array being searched, and the target parameter is the value being searched for.
The low and high variables are used to keep track of the endpoints of the current search range.
The while loop is used to iterate over the array until the target value is found or the search range is exhausted. The loop condition ensures that the target value is within the current search range and that the search range has not been exhausted.
The position variable calculates the estimated position of the target value in the array based on its value and the values of the endpoints of the search range.
The first if statement checks if the estimated position contains the target value, in which case it returns the position.
If the target value is greater than the value at the estimated position, the search range is narrowed to the upper half of the remaining array by updating low to be position + 1.
If the target value is less than the value at the estimated position, the search range is narrowed to the lower half of the remaining array by updating high to be position - 1.
If the target value is not found, the function returns -1 to indicate that the value was not found in the array.
Overall, interpolation search is a more efficient search algorithm than linear search and can be more efficient than binary search when the values in the array are uniformly distributed.

Hashing

Hashing is a searching technique that uses a hash function to compute a key for each item in a collection of data. The key is used to quickly locate the item without having to search through the entire collection.
const hashTable = new Map();

function hashKey(key) {
  // hash function to generate a key for the data
  return key.toString().length;
}

function addData(key, value) {
  const hash = hashKey(key);
  hashTable.set(hash, value);
}

function getData(key) {
  const hash = hashKey(key);
  return hashTable.get(hash);
}

addData("hello", "world");
addData("foo", "bar");

console.log(getData("hello")); // output: world
console.log(getData("foo")); // output: bar
In this example, the hashKey function generates a key for each piece of data based on the length of the key string. The addData function uses the set method of the Map object to add the data to the hash table under the generated key, and the getData function uses the get method to retrieve it. Note that this length-based hash function is purely illustrative: any two keys of the same length collide and overwrite each other, so a practical hash table needs a much better hash function and a collision-resolution strategy.
Hashing can also be used in cryptography, where it is used to generate digital signatures and verify data integrity. In cryptography, a hash function is used to generate a fixed-length digital signature for a piece of data, which can be used to verify its authenticity and integrity.

Graph Algorithms

Graph algorithms are used to solve problems that involve relationships between objects. Graphs are collections of nodes (also known as vertices) and edges, where edges represent the relationships between the nodes. Graph algorithms can be used to solve a wide variety of problems, such as finding the shortest path between two nodes, finding the minimum spanning tree of a graph, or finding cycles in a graph.
There are many different types of graph algorithms, each with its own strengths and weaknesses, and each designed to solve specific problems.
In general, a graph algorithm is a way of analyzing the relationships between nodes in a graph.

Breadth-First Search

Breadth-first search is a graph traversal algorithm that starts at a chosen node and explores all the neighboring nodes at the current depth before moving on to nodes at the next depth.
function BFS(graph, startNode) {
  const queue = [startNode];
  const visited = new Set();

  while (queue.length > 0) {
    const currentNode = queue.shift();
    visited.add(currentNode);

    const neighbors = graph[currentNode];
    for (let neighbor of neighbors) {
      if (!visited.has(neighbor)) {
        queue.push(neighbor);
        visited.add(neighbor);
      }
    }
  }

  return visited;
}
Here, the BFS function takes in two parameters - graph which is an object representing the graph, and startNode which is the node to start the BFS from.
We create a queue and a visited set to keep track of the nodes that we need to visit and the nodes that we have already visited respectively.
We start by adding the startNode to the queue.
Then, we loop through the queue as long as it is not empty. For each iteration, we take out the first node from the queue, mark it as visited, and get its neighbors from the graph.
We loop through each neighbor and if it has not been visited before, we add it to the queue and mark it as visited.
Finally, we return the visited set which contains all the nodes visited during the BFS traversal.
Note that the graph object represents an adjacency list, where each key is a node and its value is an array of its neighbors. For example:
const graph = {
  A: ["B", "C"],
  B: ["A", "D", "E"],
  C: ["A", "F"],
  D: ["B"],
  E: ["B", "F"],
  F: ["C", "E"],
};
In the above graph, node A has neighbors B and C, node B has neighbors A, D, and E, and so on.

Depth-First Search

Depth-first search is another searching algorithm that explores as far as possible along each branch before backtracking.
// Node class representing each node in the graph
class Node {
  constructor(value) {
    this.value = value;
    this.adjacents = [];
    this.visited = false;
  }
}

// Graph class representing the entire graph
class Graph {
  constructor() {
    this.nodes = [];
  }

  // Add a new node to the graph
  addNode(value) {
    const node = new Node(value);
    this.nodes.push(node);
    return node;
  }

  // Add an edge between two nodes in the graph
  addEdge(node1, node2) {
    node1.adjacents.push(node2);
    node2.adjacents.push(node1);
  }

  // Depth-First Search algorithm
  dfs(startNode) {
    const stack = [startNode];
    startNode.visited = true;
    while (stack.length > 0) {
      const currentNode = stack.pop();
      console.log(currentNode.value);
      for (let i = 0; i < currentNode.adjacents.length; i++) {
        const adjacentNode = currentNode.adjacents[i];
        if (!adjacentNode.visited) {
          adjacentNode.visited = true;
          stack.push(adjacentNode);
        }
      }
    }
  }
}

// Create a new graph
const graph = new Graph();

// Add nodes to the graph
const nodeA = graph.addNode("A");
const nodeB = graph.addNode("B");
const nodeC = graph.addNode("C");
const nodeD = graph.addNode("D");
const nodeE = graph.addNode("E");

// Add edges to the graph
graph.addEdge(nodeA, nodeB);
graph.addEdge(nodeA, nodeC);
graph.addEdge(nodeB, nodeD);
graph.addEdge(nodeC, nodeE);

// Perform Depth-First Search starting from nodeA
graph.dfs(nodeA);
In this code, we have a Node class representing each node in the graph, and a Graph class representing the entire graph. We first create a graph object and then add nodes and edges to it.
The dfs function is a Depth-First Search algorithm that takes a starting node as input. We use a stack to keep track of nodes to visit, and mark each node as visited as we traverse the graph. The console.log(currentNode.value) line prints the value of each node as we visit it.
In the end, we call graph.dfs(nodeA) to perform the Depth-First Search starting from nodeA.

Dijkstra's Algorithm

Dijkstra's algorithm is a shortest path algorithm that finds the shortest path between two nodes in a weighted graph.
function dijkstra(graph, startNode, endNode) {
  const distances = {};
  const visited = {};
  const previous = {};
  const pq = new PriorityQueue();

  // Initialize distances with Infinity except for the start node, which has distance 0
  for (const node in graph) {
    distances[node] = Infinity;
  }
  distances[startNode] = 0;

  // Add the start node to the priority queue
  pq.enqueue(startNode, 0);

  while (!pq.isEmpty()) {
    const currentNode = pq.dequeue().element;

    // If the current node is the end node, we have found the shortest path
    if (currentNode === endNode) {
      // Build the shortest path by traversing the previous object from end to start
      const path = [];
      let node = endNode;
      while (node) {
        path.push(node);
        node = previous[node];
      }
      return path.reverse();
    }

    // If we have already visited this node, skip it
    if (visited[currentNode]) {
      continue;
    }
    visited[currentNode] = true;

    // Relax the neighbors of the current node
    for (const neighbor in graph[currentNode]) {
      const distance = graph[currentNode][neighbor];
      const tentativeDistance = distances[currentNode] + distance;

      if (tentativeDistance < distances[neighbor]) {
        distances[neighbor] = tentativeDistance;
        previous[neighbor] = currentNode;
        pq.enqueue(neighbor, distances[neighbor]);
      }
    }
  }

  // If we get here, there is no path from startNode to endNode
  return null;
}

// Example usage
const graph = {
  A: { B: 2, C: 4 },
  B: { D: 3 },
  C: { D: 1, E: 6 },
  D: { E: 2 },
  E: {},
};
const shortestPath = dijkstra(graph, 'A', 'E');
console.log(shortestPath); // Output: ['A', 'B', 'D', 'E']
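Note that the listing above relies on a PriorityQueue class that is not defined in the original code. A minimal array-based sketch that would make the example runnable is shown below; a binary heap would be more efficient, but this version keeps the idea clear:

class PriorityQueue {
  constructor() {
    this.items = [];
  }

  enqueue(element, priority) {
    this.items.push({ element, priority });
    // Keep the array ordered so the smallest priority is always at the front
    this.items.sort((a, b) => a.priority - b.priority);
  }

  dequeue() {
    return this.items.shift(); // returns { element, priority }
  }

  isEmpty() {
    return this.items.length === 0;
  }
}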
In this example, the dijkstra function takes in a graph object, startNode and endNode. The graph object is an adjacency list representing the weighted directed graph. The startNode and endNode are the nodes between which we want to find the shortest path.
The algorithm uses four data structures to keep track of the distances, visited nodes, previous nodes and the priority queue. The distances object keeps track of the tentative distances for each node. The visited object keeps track of the nodes that have been visited. The previous object keeps track of the previous node in the shortest path to each node. The pq object is a priority queue that stores nodes ordered by their tentative distance.
The function starts by initializing the distances object with Infinity for all nodes except for the startNode, which is initialized to 0. It then adds the startNode to the priority queue.
The while loop continues until the priority queue is empty. Inside the loop, the function extracts the node with the smallest tentative distance, skips it if it has already been visited, and otherwise relaxes the distances of its neighbors. When the end node is dequeued, the function reconstructs the shortest path by following the previous links from the end node back to the start node.

Bellman-Ford Algorithm

The Bellman-Ford algorithm is another shortest path algorithm; unlike Dijkstra's algorithm, it can handle negative weight edges.
function bellmanFord(graph, start) {
  // Step 1: Initialize distances and predecessor
  let distances = {};
  let predecessors = {};
  for (let node in graph) {
    distances[node] = Infinity;
    predecessors[node] = null;
  }
  distances[start] = 0;

  // Step 2: Relax edges repeatedly
  for (let i = 0; i < Object.keys(graph).length - 1; i++) {
    for (let u in graph) {
      for (let v in graph[u]) {
        let weight = graph[u][v];
        if (distances[u] + weight < distances[v]) {
          distances[v] = distances[u] + weight;
          predecessors[v] = u;
        }
      }
    }
  }

  // Step 3: Check for negative-weight cycles
  for (let u in graph) {
    for (let v in graph[u]) {
      let weight = graph[u][v];
      if (distances[u] + weight < distances[v]) {
        throw new Error("Graph contains a negative-weight cycle");
      }
    }
  }

  return { distances, predecessors };
}
The bellmanFord function takes in a graph object and a starting node, and returns an object with the final distances and predecessors for each node in the graph. Here's an example usage with the following graph:
A directed graph with edges A -> B (5), A -> D (1), B -> C (2), B -> E (3), C -> E (1), D -> B (3), D -> E (6):
let graph = {
  A: { B: 5, D: 1 },
  B: { C: 2, E: 3 },
  C: { E: 1 },
  D: { B: 3, E: 6 },
  E: {}
};
let start = "A";

let result = bellmanFord(graph, start);
console.log(result.distances); // { A: 0, B: 4, C: 6, D: 1, E: 7 }
console.log(result.predecessors); // { A: null, B: 'D', C: 'B', D: 'A', E: 'D' }
In the example, the bellmanFord function returns the shortest distances and predecessors for each node in the graph, starting from node A.

Prim's Algorithm

Prim's algorithm is a minimum spanning tree algorithm that finds the minimum spanning tree for a connected, weighted graph.
function prim(graph) {
  const n = graph.length;
  const parent = new Array(n).fill(-1);
  const key = new Array(n).fill(Infinity);
  const visited = new Array(n).fill(false);

  key[0] = 0; // Set the key of the first vertex to 0

  for (let i = 0; i < n - 1; i++) {
    let minKey = Infinity;
    let minIndex = -1;

    // Find the vertex with the minimum key value among the vertices
    // that haven't been visited yet
    for (let j = 0; j < n; j++) {
      if (!visited[j] && key[j] < minKey) {
        minKey = key[j];
        minIndex = j;
      }
    }

    visited[minIndex] = true;

    // Update the key values of adjacent vertices that haven't been visited yet
    for (let j = 0; j < n; j++) {
      const weight = graph[minIndex][j];
      if (weight > 0 && !visited[j] && weight < key[j]) {
        parent[j] = minIndex;
        key[j] = weight;
      }
    }
  }

  return parent; // Return the array of parent vertices
}

// Example usage:
const graph = [
  [0, 2, 0, 6, 0],
  [2, 0, 3, 8, 5],
  [0, 3, 0, 0, 7],
  [6, 8, 0, 0, 9],
  [0, 5, 7, 9, 0]
];

const parent = prim(graph);
console.log(parent); // Output: [-1, 0, 1, 0, 1]
This implementation of Prim's Algorithm takes an adjacency matrix graph as input and returns an array of parent vertices. The n variable is the number of vertices in the graph. The algorithm initializes arrays for the parent vertices, the key values (which are initially set to infinity for all vertices except the first one, which is set to 0), and a boolean array to keep track of which vertices have been visited.
The algorithm then loops n-1 times, finding the vertex with the minimum key value among the vertices that haven't been visited yet. It marks that vertex as visited and updates the key values of its adjacent vertices that haven't been visited yet. If the weight of an edge is less than the current key value of a vertex, the parent of that vertex is updated to the current vertex. Finally, the array of parent vertices is returned.

Kruskal's Algorithm

Kruskal's algorithm is another minimum spanning tree algorithm that finds the minimum spanning tree by building it up one edge at a time.
// Define the graph using an adjacency list
const graph = {
  'A': {'B': 7, 'D': 5},
  'B': {'A': 7, 'C': 8, 'D': 9, 'E': 7},
  'C': {'B': 8, 'E': 5},
  'D': {'A': 5, 'B': 9, 'E': 15, 'F': 6},
  'E': {'B': 7, 'C': 5, 'D': 15, 'F': 8, 'G': 9},
  'F': {'D': 6, 'E': 8, 'G': 11},
  'G': {'E': 9, 'F': 11}
};

// Define a function to find the parent of a vertex in a union-find data structure
function find(parent, i) {
  if (parent[i] === i) {
    return i;
  }
  return find(parent, parent[i]);
}

// Define a function to perform union of two vertices in a union-find data structure
function union(parent, rank, x, y) {
  const xRoot = find(parent, x);
  const yRoot = find(parent, y);
  if (rank[xRoot] < rank[yRoot]) {
    parent[xRoot] = yRoot;
  } else if (rank[xRoot] > rank[yRoot]) {
    parent[yRoot] = xRoot;
  } else {
    parent[yRoot] = xRoot;
    rank[xRoot]++;
  }
}

// Define Kruskal's algorithm function
function kruskal(graph) {
  const vertices = Object.keys(graph);
  const parent = {};
  const rank = {};
  let edges = [];
  let result = [];

  // Initialize the parent and rank arrays for union-find data structure
  vertices.forEach(vertex => {
    parent[vertex] = vertex;
    rank[vertex] = 0;
  });

  // Create a list of edges from the graph
  for (const vertex in graph) {
    for (const neighbor in graph[vertex]) {
      edges.push([vertex, neighbor, graph[vertex][neighbor]]);
    }
  }

  // Sort the edges in increasing order of their weight
  edges.sort((a, b) => a[2] - b[2]);

  // Iterate over each edge and add it to the minimum spanning tree if it does not create a cycle
  edges.forEach(edge => {
    const [vertex1, vertex2, weight] = edge;
    const root1 = find(parent, vertex1);
    const root2 = find(parent, vertex2);
    if (root1 !== root2) {
      result.push(edge);
      union(parent, rank, root1, root2);
    }
  });

  return result;
}

// Test Kruskal's algorithm function with the example graph
const minimumSpanningTree = kruskal(graph);
console.log(minimumSpanningTree);
// [['A','D',5], ['C','E',5], ['D','F',6], ['A','B',7], ['B','E',7], ['E','G',9]]
In this example, the kruskal function takes a graph in the form of an adjacency list as input and returns the edges of the minimum spanning tree in the form of an array. The find function and union function are used to implement the union-find data structure, which is used to check whether adding an edge to the minimum spanning tree will create a cycle.

Floyd-Warshall Algorithm

Floyd-Warshall algorithm is an all-pairs shortest path algorithm that finds the shortest path between all pairs of nodes in a graph.
function floydWarshall(graph) {
  const dist = [];
  const n = graph.length;

  // Initialize the distance matrix with the values from the input graph
  for (let i = 0; i < n; i++) {
    dist[i] = [];
    for (let j = 0; j < n; j++) {
      dist[i][j] = graph[i][j];
    }
  }

  // Calculate the shortest paths between all pairs of vertices
  for (let k = 0; k < n; k++) {
    for (let i = 0; i < n; i++) {
      for (let j = 0; j < n; j++) {
        if (dist[i][k] !== Infinity && dist[k][j] !== Infinity && dist[i][k] + dist[k][j] < dist[i][j]) {
          dist[i][j] = dist[i][k] + dist[k][j];
        }
      }
    }
  }

  return dist;
}
The graph input parameter is a two-dimensional array representing the adjacency matrix of the graph. Each element graph[i][j] of the matrix represents the weight of the edge connecting vertex i to vertex j, or Infinity if there is no edge between the vertices.
The function returns a two-dimensional array dist representing the shortest distances between all pairs of vertices in the graph. Element dist[i][j] contains the length of the shortest path from vertex i to vertex j.
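A small usage sketch (the 4-vertex matrix is our own illustration, with 0 on the diagonal and Infinity where there is no edge):

const INF = Infinity;
const adjacencyMatrix = [
  [0,   5,   INF, 10],
  [INF, 0,   3,   INF],
  [INF, INF, 0,   1],
  [INF, INF, INF, 0]
];

const dist = floydWarshall(adjacencyMatrix);
console.log(dist[0][3]); // 9, via the path 0 -> 1 -> 2 -> 3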

Dynamic Programming

Dynamic programming is a technique used to solve problems by breaking them down into smaller, simpler subproblems. The solutions to these subproblems are stored in memory and used to solve larger, more complex problems. It is an optimization technique that can be used to solve a wide variety of problems, such as finding the shortest path between two points, or the longest common subsequence of two strings.
There are many different types of dynamic programming algorithms, each with its own strengths and weaknesses, and each designed to solve specific problems.
In general, a dynamic programming algorithm is a way of breaking a problem down into smaller subproblems and storing the solutions to those subproblems in memory.

How Dynamic Programming Works?

Dynamic programming works by solving a problem recursively, but with a twist: it stores the solutions to the subproblems in memory, so that they can be reused when solving larger problems. This means that each subproblem only needs to be solved once, rather than multiple times. Dynamic programming algorithms typically follow a similar structure:
1. Define the problem as a series of subproblems
2. Identify the base case(s)
3. Define a recurrence relation that describes the solution to each subproblem in terms of the solutions to smaller subproblems
4. Use memoization or tabulation to store the solutions to the subproblems in memory
5. Use the solutions to the subproblems to solve the larger problem
Here are some examples of problems that can be solved using dynamic programming:

Fibonacci Sequence

The Fibonacci sequence is a series of numbers where each number is the sum of the two preceding numbers. The nth number in the Fibonacci sequence can be calculated using dynamic programming.
function fibonacci(n) {
  let fib = [0, 1];
  for (let i = 2; i <= n; i++) {
    fib[i] = fib[i-1] + fib[i-2];
  }
  return fib.slice(0, n+1);
}

// Example usage:
console.log(fibonacci(10)); // Outputs [0, 1, 1, 2, 3, 5, 8, 13, 21, 34, 55]
In this implementation, we start with an array fib that contains the first two numbers in the sequence (0 and 1). We then use a for loop to generate the rest of the sequence up to the given number n. Each element fib[i] is calculated as the sum of the two preceding elements fib[i-1] and fib[i-2]. Finally, we return the subarray of fib that contains the sequence up to n.
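The version above uses tabulation (bottom-up). The same problem can also be solved top-down with memoization, the other storage strategy mentioned in step 4 earlier; here is a brief sketch:

function fibMemo(n, memo = {}) {
  if (n <= 1) return n;          // base cases: fib(0) = 0, fib(1) = 1
  if (n in memo) return memo[n]; // reuse a previously computed result
  memo[n] = fibMemo(n - 1, memo) + fibMemo(n - 2, memo);
  return memo[n];
}

console.log(fibMemo(10)); // 55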

Longest Common Subsequence

The longest common subsequence problem is to find the longest subsequence that is common to two sequences. This problem can be solved using dynamic programming by breaking it down into smaller subproblems.
function lcs(str1, str2) {
  const m = str1.length;
  const n = str2.length;
  const lcsTable = Array.from(Array(m + 1), () => new Array(n + 1).fill(0));

  for (let i = 1; i <= m; i++) {
    for (let j = 1; j <= n; j++) {
      if (str1[i - 1] === str2[j - 1]) {
        lcsTable[i][j] = lcsTable[i - 1][j - 1] + 1;
      } else {
        lcsTable[i][j] = Math.max(lcsTable[i][j - 1], lcsTable[i - 1][j]);
      }
    }
  }

  let lcs = "";
  let i = m, j = n;
  while (i > 0 && j > 0) {
    if (str1[i - 1] === str2[j - 1]) {
      lcs = str1[i - 1] + lcs;
      i--;
      j--;
    } else if (lcsTable[i - 1][j] > lcsTable[i][j - 1]) {
      i--;
    } else {
      j--;
    }
  }
  return lcs;
}
The lcs function takes two string arguments str1 and str2 and returns the longest common subsequence between them. It uses a two-dimensional array lcsTable to store the length of LCS of the prefixes of str1 and str2. The last element in lcsTable will store the length of the LCS of the full str1 and str2.
The function then constructs the LCS by starting from the bottom-right cell of lcsTable and traversing back to the top-left cell, following the path with the maximum length.
Note that this implementation has a time complexity of O(m * n) and a space complexity of O(m * n), where m and n are the lengths of the input strings.
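A quick usage example (the input strings are our own illustration):

console.log(lcs("abcde", "ace")); // "ace"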

Knapsack Problem

The knapsack problem is to find the maximum value of items that can be put into a knapsack of a given size. This problem can be solved using dynamic programming by breaking it down into smaller subproblems.
function knapsack(items, capacity) {
  const n = items.length;
  const dp = Array.from(Array(n + 1), () => Array(capacity + 1).fill(0));

  for (let i = 1; i <= n; i++) {
    for (let j = 0; j <= capacity; j++) {
      if (items[i-1].weight <= j) {
        dp[i][j] = Math.max(dp[i-1][j], dp[i-1][j - items[i-1].weight] + items[i-1].value);
      } else {
        dp[i][j] = dp[i-1][j];
      }
    }
  }

  const result = [];
  let i = n;
  let j = capacity;

  while (i > 0 && j > 0) {
    if (dp[i][j] !== dp[i-1][j]) {
      result.push(items[i-1]);
      j -= items[i-1].weight;
    }
    i--;
  }

  return result.reverse();
}

const items = [
  { name: "item1", weight: 2, value: 5 },
  { name: "item2", weight: 3, value: 6 },
  { name: "item3", weight: 4, value: 7 },
  { name: "item4", weight: 5, value: 8 },
  { name: "item5", weight: 6, value: 9 }
];
const capacity = 10;

const result = knapsack(items, capacity);
console.log(result); // [{ name: "item1", weight: 2, value: 5 }, { name: "item2", weight: 3, value: 6 }, { name: "item4", weight: 5, value: 8 }]
In this implementation, we use a two-dimensional array dp to store the maximum value that can be achieved for each subproblem of considering the first i items and having a capacity of j. We compute the values of dp in a bottom-up manner by iterating over the items and capacities. Finally, we reconstruct the solution by tracing back the items that were selected to achieve the maximum value.

Longest Increasing Subsequence

The longest increasing subsequence problem is to find the longest subsequence of a given sequence in which the subsequence's elements are in sorted order, lowest to highest, and in which the subsequence is as long as possible.
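A standard O(n²) dynamic-programming sketch for this problem (the sample input is our own illustration):

function longestIncreasingSubsequence(nums) {
  if (nums.length === 0) return 0;
  // dp[i] = length of the longest increasing subsequence ending at index i
  const dp = new Array(nums.length).fill(1);
  for (let i = 1; i < nums.length; i++) {
    for (let j = 0; j < i; j++) {
      if (nums[j] < nums[i]) {
        dp[i] = Math.max(dp[i], dp[j] + 1);
      }
    }
  }
  return Math.max(...dp);
}

console.log(longestIncreasingSubsequence([10, 9, 2, 5, 3, 7, 101, 18])); // 4 (e.g. 2, 3, 7, 18)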

Matrix Chain Multiplication

The matrix chain multiplication problem is to find the most efficient way to multiply a given sequence of matrices. This problem can be solved using dynamic programming by breaking it down into smaller subproblems.
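A classic interval-DP sketch, where dims[i-1] x dims[i] is the shape of matrix i and m[i][j] holds the minimum number of scalar multiplications needed to multiply matrices i..j (the sample dimensions are our own illustration):

function matrixChainOrder(dims) {
  const n = dims.length - 1; // number of matrices
  // m[i][j] = minimum cost of multiplying matrices i..j (1-indexed)
  const m = Array.from(Array(n + 1), () => new Array(n + 1).fill(0));

  for (let len = 2; len <= n; len++) {        // chain length
    for (let i = 1; i <= n - len + 1; i++) {
      const j = i + len - 1;
      m[i][j] = Infinity;
      for (let k = i; k < j; k++) {           // split point
        const cost = m[i][k] + m[k + 1][j] + dims[i - 1] * dims[k] * dims[j];
        if (cost < m[i][j]) m[i][j] = cost;
      }
    }
  }
  return m[1][n];
}

console.log(matrixChainOrder([10, 30, 5, 60])); // 4500, i.e. (A1 x A2) x A3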

0-1 Knapsack Problem

The 0-1 knapsack problem is to find the maximum value of items that can be put into a knapsack of a given size, where each item is either taken whole or left behind (no fractions). The knapsack implementation shown earlier in this section is exactly this 0-1 variant, solved by breaking the problem down into smaller subproblems.

Shortest Common Supersequence

The shortest common supersequence problem is to find the shortest sequence that contains both of two given sequences as subsequences. This problem can be solved using dynamic programming by breaking it down into smaller subproblems.
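A sketch that computes the length of the shortest common supersequence (the sample strings are our own illustration):

function shortestCommonSupersequenceLength(str1, str2) {
  const m = str1.length, n = str2.length;
  // dp[i][j] = SCS length of the first i chars of str1 and the first j chars of str2
  const dp = Array.from(Array(m + 1), () => new Array(n + 1).fill(0));
  for (let i = 0; i <= m; i++) {
    for (let j = 0; j <= n; j++) {
      if (i === 0) dp[i][j] = j;
      else if (j === 0) dp[i][j] = i;
      else if (str1[i - 1] === str2[j - 1]) dp[i][j] = dp[i - 1][j - 1] + 1;
      else dp[i][j] = 1 + Math.min(dp[i - 1][j], dp[i][j - 1]);
    }
  }
  return dp[m][n];
}

console.log(shortestCommonSupersequenceLength("geek", "eke")); // 5 ("geeke")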

Longest Palindromic Subsequence

The longest palindromic subsequence problem is to find the longest subsequence of a given sequence that is also a palindrome. This problem can be solved using dynamic programming by breaking it down into smaller subproblems.
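A sketch using an interval DP table, where dp[i][j] holds the length of the longest palindromic subsequence of s[i..j] (the sample input is our own illustration):

function longestPalindromicSubsequence(s) {
  const n = s.length;
  const dp = Array.from(Array(n), () => new Array(n).fill(0));
  for (let i = n - 1; i >= 0; i--) {
    dp[i][i] = 1; // a single character is a palindrome
    for (let j = i + 1; j < n; j++) {
      if (s[i] === s[j]) {
        dp[i][j] = dp[i + 1][j - 1] + 2; // matching ends extend the inner answer
      } else {
        dp[i][j] = Math.max(dp[i + 1][j], dp[i][j - 1]);
      }
    }
  }
  return dp[0][n - 1];
}

console.log(longestPalindromicSubsequence("bbbab")); // 4 ("bbbb")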

Longest Repeating Subsequence

The longest repeating subsequence problem is to find the longest subsequence that appears at least twice in a given sequence, where the two occurrences do not reuse the same character positions. This problem can be solved using dynamic programming by breaking it down into smaller subproblems.
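This can be computed as a variant of the LCS of the sequence with itself, with the extra condition that matching characters must come from different indices. A sketch (the sample input is our own illustration):

function longestRepeatingSubsequence(s) {
  const n = s.length;
  const dp = Array.from(Array(n + 1), () => new Array(n + 1).fill(0));
  for (let i = 1; i <= n; i++) {
    for (let j = 1; j <= n; j++) {
      // same character, but at different positions in the two copies
      if (s[i - 1] === s[j - 1] && i !== j) {
        dp[i][j] = dp[i - 1][j - 1] + 1;
      } else {
        dp[i][j] = Math.max(dp[i - 1][j], dp[i][j - 1]);
      }
    }
  }
  return dp[n][n];
}

console.log(longestRepeatingSubsequence("aabebcdd")); // 3 ("abd")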

Longest Palindromic Substring

The longest palindromic substring problem is to find the longest substring of a given sequence that is also a palindrome. This problem can be solved using dynamic programming by breaking it down into smaller subproblems.
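A DP sketch where dp[i][j] records whether s[i..j] is a palindrome (the sample input is our own illustration):

function longestPalindromicSubstring(s) {
  const n = s.length;
  if (n === 0) return "";
  const dp = Array.from(Array(n), () => new Array(n).fill(false));
  let start = 0, maxLen = 1;
  for (let i = n - 1; i >= 0; i--) {
    dp[i][i] = true;
    for (let j = i + 1; j < n; j++) {
      // ends match, and the inside is empty, one char, or itself a palindrome
      if (s[i] === s[j] && (j - i < 2 || dp[i + 1][j - 1])) {
        dp[i][j] = true;
        if (j - i + 1 > maxLen) {
          start = i;
          maxLen = j - i + 1;
        }
      }
    }
  }
  return s.substring(start, start + maxLen);
}

console.log(longestPalindromicSubstring("babad")); // "aba" ("bab" is an equally long answer)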

Word Break Problem

The word break problem is to find out if a given sequence of characters can be segmented into a space-separated sequence of one or more dictionary words. This problem can be solved using dynamic programming by breaking it down into smaller subproblems.
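A sketch where dp[i] records whether the first i characters can be segmented into dictionary words (the sample inputs are our own illustration):

function wordBreak(s, wordDict) {
  const words = new Set(wordDict);
  // dp[i] = true if s.slice(0, i) can be segmented into dictionary words
  const dp = new Array(s.length + 1).fill(false);
  dp[0] = true; // the empty prefix is trivially segmentable
  for (let i = 1; i <= s.length; i++) {
    for (let j = 0; j < i; j++) {
      if (dp[j] && words.has(s.slice(j, i))) {
        dp[i] = true;
        break;
      }
    }
  }
  return dp[s.length];
}

console.log(wordBreak("applepenapple", ["apple", "pen"])); // true
console.log(wordBreak("catsandog", ["cats", "dog", "sand", "and", "cat"])); // false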

Subset Sum Problem

The subset sum problem is to find out if there is a subset of a given set with sum equal to a given number. This problem can be solved using dynamic programming by breaking it down into smaller subproblems.
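A one-dimensional DP sketch, assuming non-negative integers (the sample inputs are our own illustration):

function subsetSum(nums, target) {
  // dp[j] = true if some subset of the numbers seen so far sums to j
  const dp = new Array(target + 1).fill(false);
  dp[0] = true; // the empty subset sums to 0
  for (const num of nums) {
    for (let j = target; j >= num; j--) { // iterate backwards so each number is used at most once
      if (dp[j - num]) dp[j] = true;
    }
  }
  return dp[target];
}

console.log(subsetSum([3, 34, 4, 12, 5, 2], 9)); // true (4 + 5)
console.log(subsetSum([3, 34, 4, 12, 5, 2], 30)); // false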

Minimum Partition

The minimum partition problem is to find the minimum difference between the sum of two subsets of a given set. This problem can be solved using dynamic programming by breaking it down into smaller subproblems.
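This reduces to a subset-sum computation: find the achievable subset sum closest to half of the total. A sketch (the sample input is our own illustration):

function minimumPartition(nums) {
  const total = nums.reduce((a, b) => a + b, 0);
  const half = Math.floor(total / 2);
  // dp[j] = true if some subset sums to exactly j
  const dp = new Array(half + 1).fill(false);
  dp[0] = true;
  for (const num of nums) {
    for (let j = half; j >= num; j--) {
      if (dp[j - num]) dp[j] = true;
    }
  }
  // the largest achievable sum not exceeding total/2 minimizes the difference
  for (let j = half; j >= 0; j--) {
    if (dp[j]) return total - 2 * j;
  }
}

console.log(minimumPartition([1, 6, 11, 5])); // 1 ({1, 5, 6} vs {11})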

Longest Bitonic Subsequence

The longest bitonic subsequence problem is to find the longest subsequence of a given sequence that first increases and then decreases. This problem can be solved using dynamic programming by breaking it down into smaller subproblems.
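A sketch that combines a longest-increasing pass from the left with a longest-decreasing pass from the right (the sample input is our own illustration):

function longestBitonicSubsequence(nums) {
  const n = nums.length;
  // lis[i] = longest increasing subsequence ending at i
  // lds[i] = longest decreasing subsequence starting at i
  const lis = new Array(n).fill(1);
  const lds = new Array(n).fill(1);

  for (let i = 1; i < n; i++)
    for (let j = 0; j < i; j++)
      if (nums[j] < nums[i]) lis[i] = Math.max(lis[i], lis[j] + 1);

  for (let i = n - 2; i >= 0; i--)
    for (let j = n - 1; j > i; j--)
      if (nums[j] < nums[i]) lds[i] = Math.max(lds[i], lds[j] + 1);

  let best = 0;
  for (let i = 0; i < n; i++) best = Math.max(best, lis[i] + lds[i] - 1);
  return best;
}

console.log(longestBitonicSubsequence([1, 11, 2, 10, 4, 5, 2, 1])); // 6 (e.g. 1, 2, 10, 4, 2, 1)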

Count of Subsets with a Given Sum

The count of subset sum problem is to find the number of subsets of a given set with sum equal to a given number. This problem can be solved using dynamic programming by breaking it down into smaller subproblems.
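A counting variant of the subset-sum DP, assuming positive integers (the sample input is our own illustration):

function countSubsetsWithSum(nums, target) {
  // dp[j] = number of subsets of the numbers seen so far that sum to j
  const dp = new Array(target + 1).fill(0);
  dp[0] = 1; // the empty subset
  for (const num of nums) {
    for (let j = target; j >= num; j--) {
      dp[j] += dp[j - num];
    }
  }
  return dp[target];
}

console.log(countSubsetsWithSum([1, 2, 3, 3], 6)); // 3 ({1,2,3}, {1,2,3}, {3,3})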

Longest Common Substring

The longest common substring problem is to find the longest substring that is common to two strings. This problem can be solved using dynamic programming by breaking it down into smaller subproblems.
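A sketch where dp[i][j] is the length of the longest common suffix of the two prefixes ending at i and j (the sample inputs are our own illustration):

function longestCommonSubstring(str1, str2) {
  const m = str1.length, n = str2.length;
  const dp = Array.from(Array(m + 1), () => new Array(n + 1).fill(0));
  let maxLen = 0, endIndex = 0;
  for (let i = 1; i <= m; i++) {
    for (let j = 1; j <= n; j++) {
      if (str1[i - 1] === str2[j - 1]) {
        dp[i][j] = dp[i - 1][j - 1] + 1; // extend the common suffix
        if (dp[i][j] > maxLen) {
          maxLen = dp[i][j];
          endIndex = i;
        }
      }
    }
  }
  return str1.substring(endIndex - maxLen, endIndex);
}

console.log(longestCommonSubstring("abcdxyz", "xyzabcd")); // "abcd"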

Maximum Sum Increasing Subsequence

The maximum sum increasing subsequence problem is to find the maximum sum of an increasing subsequence of a given sequence. This problem can be solved using dynamic programming by breaking it down into smaller subproblems.
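A sketch in the style of the LIS solution above, tracking sums instead of lengths (the sample input is our own illustration):

function maxSumIncreasingSubsequence(nums) {
  // dp[i] = maximum sum of an increasing subsequence ending at index i
  const dp = [...nums];
  for (let i = 1; i < nums.length; i++) {
    for (let j = 0; j < i; j++) {
      if (nums[j] < nums[i]) {
        dp[i] = Math.max(dp[i], dp[j] + nums[i]);
      }
    }
  }
  return Math.max(...dp);
}

console.log(maxSumIncreasingSubsequence([1, 101, 2, 3, 100, 4, 5])); // 106 (1 + 2 + 3 + 100)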

Longest Alternating Subsequence

The longest alternating subsequence problem is to find the longest subsequence of a given sequence whose elements alternate between rising and falling. This problem can be solved using dynamic programming by breaking it down into smaller subproblems.
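A compact sketch that tracks the best alternating subsequence ending in a rise (up) or a fall (down) (the sample input is our own illustration):

function longestAlternatingSubsequence(nums) {
  if (nums.length === 0) return 0;
  // up = longest alternating subsequence so far that ends with a rise
  // down = longest alternating subsequence so far that ends with a fall
  let up = 1, down = 1;
  for (let i = 1; i < nums.length; i++) {
    if (nums[i] > nums[i - 1]) up = down + 1;
    else if (nums[i] < nums[i - 1]) down = up + 1;
  }
  return Math.max(up, down);
}

console.log(longestAlternatingSubsequence([1, 5, 4, 9, 3])); // 5 (1, 5, 4, 9, 3)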

Optimization Algorithms

Optimization algorithms are used to find the optimal solution to a problem. These algorithms are commonly used in various fields such as mathematics, engineering, physics, and computer science to solve problems that involve maximizing or minimizing an objective function subject to certain constraints.
There are many different types of optimization algorithms, each with its own strengths and weaknesses, and each designed to solve specific problems.
In general, an optimization algorithm works by iteratively improving the solution until it converges to the optimal solution. The algorithm starts with an initial solution and then makes small adjustments to try to improve it. In gradient-based methods, the adjustments follow the gradient of the objective function, which tells the algorithm in which direction the solution should be adjusted to improve it.

How Optimization Algorithms Work?

Optimization algorithms typically follow a similar structure:
1. Define the problem as an objective function and constraints
2. Choose an initial solution
3. Calculate the gradient of the objective function
4. Adjust the solution in the direction of the gradient to improve it
5. Repeat steps 3-4 until the solution converges to the optimal solution (a minimal sketch of this loop follows below)
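As a concrete illustration of steps 3-5, here is a minimal gradient-descent sketch for minimizing the one-dimensional objective f(x) = (x - 3)^2; the function, learning rate, and stopping rule are our own illustrative choices:

// Minimize f(x) = (x - 3)^2, whose gradient is f'(x) = 2 * (x - 3)
function gradientDescent(gradient, x0, learningRate = 0.1, tolerance = 1e-8, maxIter = 1000) {
  let x = x0; // step 2: initial solution
  for (let i = 0; i < maxIter; i++) {
    const g = gradient(x);              // step 3: gradient at the current solution
    const next = x - learningRate * g;  // step 4: move against the gradient
    if (Math.abs(next - x) < tolerance) return next; // step 5: converged
    x = next;
  }
  return x;
}

const xMin = gradientDescent(x => 2 * (x - 3), 0);
console.log(xMin); // ~3, the minimizer of (x - 3)^2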
Here are some examples of problems that can be solved using optimization algorithms:

Linear Programming

Linear programming is a technique used to find the optimal solution to a problem that involves linear constraints. This problem can be solved using optimization algorithms such as the simplex method or the interior-point method.

Nonlinear Programming

Nonlinear programming is a technique used to find the optimal solution to a problem that involves nonlinear constraints. This problem can be solved using optimization algorithms such as the gradient descent method or the Newton method.

Integer Programming

Integer programming is a technique used to find the optimal solution to a problem that involves integer constraints. This problem can be solved using optimization algorithms such as the branch and bound method or the cutting plane method.

Convex Optimization

Convex optimization is a technique used to find the optimal solution to a problem with a convex objective function and convex constraints. This problem can be solved using optimization algorithms such as the gradient descent method or Newton's method.

Quadratic Programming

Quadratic programming is a technique used to find the optimal solution to a problem with a quadratic objective function and linear constraints. This problem can be solved using optimization algorithms such as interior-point or active-set methods, or gradient descent in the unconstrained case.

Nonlinear Least Squares

Nonlinear least squares is a technique used to fit a model to data by minimizing the sum of squared residuals between the model's predictions and the observed values. This problem can be solved using optimization algorithms such as the gradient descent method or the Gauss-Newton method.