Data Structures Quick Summary
Constant-time algorithms have a runtime that does not depend on the input size. Linear-time algorithms have a runtime proportional to the input size, and quadratic-time algorithms have a runtime proportional to the square of the input size. The binary search algorithm is
an example of an algorithm with logarithmic time complexity.

The text then moves on to
discuss arrays, which are a fundamental building block for most data structures. Arrays are
fixed-length containers with indexable elements, usually ranging from 0 to n-1. They are
laid out as contiguous chunks of memory and are used to store temporary objects, act as buffers or lookup tables, or serve as a workaround in programming languages that allow only one return value. Arrays can be searched, but it may take up to linear time to traverse all
elements. Static arrays cannot grow or shrink in size, while dynamic arrays can.

The text
concludes by explaining the basic structure of an array, the common operations that can be
performed on them, and some complexity analysis. It also provides an example of
implementing a dynamic array using static arrays.
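The dynamic array built on static arrays that the text describes can be sketched roughly as follows. This is a minimal Python sketch, not the course's own code; the class name, method names, and default capacity are illustrative assumptions.

```python
class DynamicArray:
    """Sketch of a dynamic array backed by a fixed-size buffer."""

    def __init__(self, capacity=16):
        self._capacity = capacity
        self._size = 0
        self._data = [None] * capacity  # the underlying "static" array

    def add(self, value):
        # Double the capacity when the next element would not fit
        # (mirrors the "length plus one >= capacity" condition).
        if self._size + 1 >= self._capacity:
            self._capacity *= 2
            new_data = [None] * self._capacity
            for i in range(self._size):  # copy existing elements over
                new_data[i] = self._data[i]
            self._data = new_data
        self._data[self._size] = value
        self._size += 1

    def remove_at(self, index):
        # Shift everything after `index` left by one slot.
        if not 0 <= index < self._size:
            raise IndexError(index)
        removed = self._data[index]
        for i in range(index, self._size - 1):
            self._data[i] = self._data[i + 1]
        self._size -= 1
        return removed

    def __len__(self):
        return self._size

    def __getitem__(self, index):
        if not 0 <= index < self._size:
            raise IndexError(index)
        return self._data[index]
```

Appending is amortized constant time because the expensive copy happens only on the doublings, while `remove_at` is linear due to the shift.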
I am explaining how to create a dynamic array using a static array with an initial capacity of 90. As elements are added, they are placed in the underlying static array, keeping track of the number of elements added. If adding an element would exceed the capacity of the internal static array, the array size is doubled, and all elements are copied into the new array, along with the new
element. The dynamic array source code is provided, which includes constructors, methods
to get and set the size of the array, to clear the array, and to add and remove elements. The
Add method resizes the array when the length plus one is greater than or equal to the
capacity. The removeAt method removes the element at a given index, the remove method removes the first occurrence of a particular value, and the contains method checks whether a value is present in the array. An iterator is also included to iterate over the array, and a toString method provides a string representation of the array.

In the second part, I explain what linked lists are and where they are used. Linked
lists are sequential chains of nodes that hold data, where each node points to the next node in the sequence. They are used to implement abstract data types such as lists, stacks, and queues because of their good time complexity for adding and removing elements. Linked lists can also be used to model real-world objects such as a line of train carts.

Terminology
concerning linked lists is also discussed, including the head and tail of the list, the nodes,
and the pointers or references which point to the next node. There are two types of linked
lists, singly linked and doubly linked. Singly linked lists only contain a pointer to the next
node, while doubly linked lists contain a pointer to the previous node as well. The pros and
cons of using singly and doubly linked lists are discussed, including the fact that singly
linked lists use less memory but cannot access previous elements, while doubly linked lists
can access previous elements but use more memory.
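The node structures behind the two kinds of list can be sketched as follows. This is an illustrative Python sketch; the class and field names are assumptions, not taken from the course code.

```python
class SinglyNode:
    """A singly linked node: data plus one pointer to the next node."""
    def __init__(self, data, next=None):
        self.data = data
        self.next = next


class DoublyNode:
    """A doubly linked node: data plus pointers in both directions.
    The extra `prev` pointer costs memory but allows backwards traversal."""
    def __init__(self, data, prev=None, next=None):
        self.data = data
        self.prev = prev
        self.next = next


# Build a tiny doubly linked chain: a <-> b
a = DoublyNode("a")
b = DoublyNode("b", prev=a)
a.next = b
```

The memory trade-off is visible directly: each doubly linked node carries one extra pointer per element.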
Doubly linked lists allow for easier traversal backwards and removal in constant time,
but use twice as much memory as singly linked lists. Inserting and removing elements
involve seeking to the position in the list and changing the appropriate pointers. Singly
linked lists require two pointers to remove an element, while doubly linked lists only require
one. Searching for an element in a linked list takes linear time in the worst case. Adding or
removing elements at the head or tail is done in constant time, but removing from the tail of a singly linked list takes linear time, since the list must be traversed to find the node before the tail.
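The asymmetry in that last point can be sketched as follows: removing the head of a singly linked list is a single pointer update, while removing the tail requires walking the whole list to find the node before it. This is an illustrative Python sketch; the function names are assumptions.

```python
class Node:
    """A singly linked node."""
    def __init__(self, data, next=None):
        self.data = data
        self.next = next


def remove_head(head):
    """O(1): the new head is simply the next node."""
    return head.next


def remove_tail(head):
    """O(n): walk to the second-to-last node so its next pointer can be
    cleared; a doubly linked list would reach it in O(1) via `prev`."""
    if head is None or head.next is None:
        return None
    node = head
    while node.next.next is not None:
        node = node.next
    node.next = None
    return head
```

A doubly linked list avoids the traversal because the tail's `prev` pointer identifies the new tail immediately.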