Today we learned about hash tables and how they are an efficient way to look up items (their average case complexity, with a good hash function, is O(1)). It turns out Python's dictionaries are an application of hash tables. Perhaps that's why it's so fast to access a value with its key... However, it is almost impossible to avoid collisions when storing data in hash tables, so we learned about two techniques for handling them: probing (open addressing, which is what Python's dictionaries actually use) and chaining. Both these techniques have setbacks of their own (as the table fills up, lookups can degrade to a worst case complexity of O(n), which defeats the purpose of having hash tables!). I can't see how I'll be tested on this topic since we only went over it conceptually, but I never had trouble understanding it and my interest was even piqued by this aspect of the course.
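To make the collision-handling idea concrete, here's a minimal sketch of a hash table that uses chaining (each bucket holds a list of key-value pairs). Python's real dict uses probing instead, and all the names here (ChainedHashTable, put, get) are my own invention, not anything from class:

```python
class ChainedHashTable:
    """A toy hash table that resolves collisions by chaining."""

    def __init__(self, num_buckets=8):
        # Each bucket is a list of (key, value) pairs.
        self.buckets = [[] for _ in range(num_buckets)]

    def _bucket(self, key):
        # hash() maps the key to an int; % folds it into a bucket index.
        # Colliding keys simply end up in the same bucket's list.
        return self.buckets[hash(key) % len(self.buckets)]

    def put(self, key, value):
        bucket = self._bucket(key)
        for i, (k, _) in enumerate(bucket):
            if k == key:
                bucket[i] = (key, value)  # overwrite an existing key
                return
        bucket.append((key, value))

    def get(self, key):
        # Average case O(1): buckets stay short if keys spread out evenly.
        # Worst case O(n): every key hashes to the same bucket, so this
        # loop scans all n items -- a plain linear search.
        for k, v in self._bucket(key):
            if k == key:
                return v
        raise KeyError(key)


table = ChainedHashTable()
table.put('apple', 1)
table.put('banana', 2)
print(table.get('apple'))  # prints 1
```

The O(n) worst case the lecture warned about is visible right in `get`: if everything collides, the lookup is just a linear scan of one long bucket.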
There isn't much I can talk about in the labs since we only looked at the running times of different sorting algorithms. For most of the sorting algorithms we looked at, the worst case complexity is either O(n log n) or O(n^2). However, each algorithm's actual run time can be faster or slower than the others depending on whether the input list is sorted, unsorted, or reversed. Analyzing the best, average, and worst case complexities through the code irked me a little bit because I really had to understand how each sorting algorithm actually worked before I could do that. I'll look over the code before the final.
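A quick sketch of why input order matters so much, using insertion sort as the example (my own code, not the lab's): on an already-sorted list the inner loop never runs, giving O(n), while a reversed list forces the full O(n^2) shifting. The exact timings will vary by machine:

```python
import time

def insertion_sort(lst):
    """Return a sorted copy of lst using insertion sort."""
    lst = list(lst)
    for i in range(1, len(lst)):
        item = lst[i]
        j = i
        # Shift larger items right until item's spot is found.
        # On a sorted input this loop body never executes (best case O(n));
        # on a reversed input it runs i times (worst case O(n^2)).
        while j > 0 and lst[j - 1] > item:
            lst[j] = lst[j - 1]
            j -= 1
        lst[j] = item
    return lst

n = 2000
for name, data in [('sorted', list(range(n))),
                   ('reversed', list(range(n, 0, -1)))]:
    start = time.perf_counter()
    result = insertion_sort(data)
    elapsed = time.perf_counter() - start
    print(f'{name:>8}: {elapsed:.4f}s')
```

Running this, the reversed case is dramatically slower than the sorted case even though it's the same algorithm on the same n, which is exactly the best-vs-worst-case gap we measured in the lab.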
That's it for me. It's been one fun SLOG. Hope my finals go well.