The final exam will be held on Thursday, May 18th, from 5-8pm. The
class has been split across several rooms, so please go to the room marked
below, based on the first letter of your last name.
A-J: 306 Soda Hall
K-O: 405 Soda Hall
P-S: 310 Soda Hall
T-Z: 380 Soda Hall
If you have notified the course staff of a conflict with the final
exam time, you should have received an email confirming our awareness
of the conflict. If not, please email cs188-staff@lists
as soon as possible.
The exam will be open-notes and open-book. You may use a simple
calculator, but no laptops or networked devices. The exam is
designed not to require books or notes; a calculator is not required,
but may be helpful. You can also look at exams from previous
semesters for practice.
Changes to the practice final solutions appear at the bottom of this page.
Review Sessions and Office Hours
Special TA office hour schedule:
Monday: 11am-12pm (John in 551)
Tuesday: 12:30pm-4:30pm (John, then Aria in 511)
Thursday: 1:00pm-5:00pm (Arlo (1-5 in 511), Aria (2:30-4:00 in 711))
Tuesday's office hours will be structured around the practice final. We will follow roughly this schedule for Tuesday:
Search (#2) @ 12:30
Probabilistic Reasoning (#4) @ 1:15
Tree Search (#5) @ 2:00
Hidden Markov Models (#7) @ 2:30
MDPs and Reinforcement Learning (#8) @ 3:30
There will, of course, be plenty of time for general questions as well. What about questions #1, #2 and #6? Those were (mostly) solved during the review session on Sunday. If you have more questions about them, feel free to ask.
Thursday's office hours will be more free-form.
The main review session was held on Sunday, May 14.
Possible Exam Topics
Search:
BFS, DFS, UCS, A*, Greedy search
Search algorithms' strengths and weaknesses
Properties: completeness, optimality
Be able to phrase search problems and create heuristics
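As a refresher on phrasing search problems, here is a minimal A* sketch on a toy grid with a Manhattan-distance heuristic (the grid, wall, and unit step costs are made-up illustrations, not from any exam question):

```python
import heapq

def astar(start, goal, walls, width, height):
    # Expand nodes in order of f(n) = g(n) + h(n); Manhattan distance
    # is admissible (and consistent) for unit-cost grid moves.
    def h(p):
        return abs(p[0] - goal[0]) + abs(p[1] - goal[1])
    frontier = [(h(start), 0, start, [start])]
    closed = set()
    while frontier:
        f, g, node, path = heapq.heappop(frontier)
        if node == goal:
            return path
        if node in closed:
            continue
        closed.add(node)
        x, y = node
        for nx, ny in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if 0 <= nx < width and 0 <= ny < height and (nx, ny) not in walls:
                heapq.heappush(frontier, (g + 1 + h((nx, ny)), g + 1,
                                          (nx, ny), path + [(nx, ny)]))
    return None  # no path exists

# 3x3 grid with one wall: optimal path has 4 steps (5 states)
print(astar((0, 0), (2, 2), {(1, 1)}, 3, 3))
```

With an admissible heuristic, the first goal dequeued is optimal; with a consistent one, the closed set is safe to use, which is worth being able to argue on the exam.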
Probabilistic Reasoning:
Basics of parameter estimation (maximum likelihood, smoothing)
Inference with variable elimination
Be able to draw an appropriate BN for a domain
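For parameter estimation, maximum likelihood with Laplace (add-k) smoothing can be sketched as follows (the coin-flip data is a made-up example):

```python
from collections import Counter

def estimate(samples, k, domain):
    # ML with Laplace (add-k) smoothing:
    # P(x) = (count(x) + k) / (N + k * |domain|); k = 0 gives plain ML.
    counts = Counter(samples)
    n = len(samples)
    return {x: (counts[x] + k) / (n + k * len(domain)) for x in domain}

data = ['h', 'h', 'h', 't']
ml = estimate(data, 0, ['h', 't'])        # h: 3/4, t: 1/4
smoothed = estimate(data, 1, ['h', 't'])  # h: 4/6, t: 2/6
print(ml, smoothed)
```

Note how smoothing pulls the estimate toward uniform, which matters most when counts are small.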
Hidden Markov Models:
Markov and hidden Markov chains
Forward algorithm
Viterbi algorithm
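The forward algorithm can be summarized in a few lines: time-elapse with the transition model, weight by the evidence, normalize. Here is a sketch on the standard umbrella/weather toy model (the probabilities below are illustrative, not tied to any exam question):

```python
def forward(prior, trans, emit, observations):
    # B'(x) proportional to emit[x][e] * sum_x' trans[x'][x] * B(x')
    belief = dict(prior)
    for e in observations:
        new = {x: emit[x][e] * sum(trans[xp][x] * belief[xp] for xp in belief)
               for x in belief}
        z = sum(new.values())  # normalize so the belief sums to 1
        belief = {x: p / z for x, p in new.items()}
    return belief

prior = {'rain': 0.5, 'sun': 0.5}
trans = {'rain': {'rain': 0.7, 'sun': 0.3},
         'sun':  {'rain': 0.3, 'sun': 0.7}}
emit = {'rain': {'umbrella': 0.9, 'none': 0.1},
        'sun':  {'umbrella': 0.2, 'none': 0.8}}
print(forward(prior, trans, emit, ['umbrella', 'umbrella']))
```

Viterbi has the same recurrence with the sum replaced by a max (plus backpointers to recover the most likely sequence).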
Decisions:
Utilities, rationality, MEU
Markov decision processes
Reward functions
Bellman Equations
Value and policy iteration
Be able to phrase an MDP
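When phrasing an MDP, it helps to remember that value iteration is just the Bellman update applied repeatedly. A minimal sketch on a made-up three-state chain (states, transition, and reward functions are illustrative):

```python
def value_iteration(states, actions, T, R, gamma, iters=100):
    # Bellman update:
    # V_{k+1}(s) = max_a sum_s' T(s,a,s') * [R(s,a,s') + gamma * V_k(s')]
    V = {s: 0.0 for s in states}
    for _ in range(iters):
        V = {s: max((sum(p * (R(s, a, s2) + gamma * V[s2])
                         for s2, p in T(s, a).items())
                     for a in actions(s)), default=0.0)  # terminals get 0
             for s in states}
    return V

# toy chain: move right until terminal state 2; reward 1 on entering it
states = [0, 1, 2]
actions = lambda s: [] if s == 2 else ['right']
T = lambda s, a: {s + 1: 1.0}          # deterministic transitions
R = lambda s, a, s2: 1.0 if s2 == 2 else 0.0
V = value_iteration(states, actions, T, R, gamma=0.9)
print(V)  # V(1) = 1.0, V(0) = 0.9 (one step of discounting)
```

Policy iteration alternates between evaluating a fixed policy (the same update without the max) and greedily improving it.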
Reinforcement Learning:
Exploration vs exploitation
TD value learning / Q-learning
Linear value function approximation
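A single Q-learning update is worth being able to write down from memory. A sketch of one update on an observed transition (the state/action names and numbers below are made up):

```python
from collections import defaultdict

def q_update(Q, s, a, r, s2, next_actions, alpha, gamma):
    # Q(s,a) <- (1 - alpha) * Q(s,a) + alpha * [r + gamma * max_a' Q(s',a')]
    sample = r + gamma * max((Q[(s2, a2)] for a2 in next_actions), default=0.0)
    Q[(s, a)] = (1 - alpha) * Q[(s, a)] + alpha * sample

Q = defaultdict(float)          # all Q-values start at 0
# observed transition: from 'A', taking 'exit' pays 10 and terminates
q_update(Q, 'A', 'exit', 10.0, 'done', [], alpha=0.5, gamma=0.9)
print(Q[('A', 'exit')])  # 5.0: halfway from 0 toward the sample of 10
```

With linear approximation, the same temporal-difference error instead nudges feature weights: w_i <- w_i + alpha * (sample - Q(s,a)) * f_i(s,a).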
Games:
Minimax search and variants (expectimax, etc.)
Alpha-beta pruning
Basic game theory (identify pure Nash equilibria, etc.)
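For minimax, be ready to trace alpha-beta pruning by hand. A compact sketch, run on a made-up two-ply tree (the tree and leaf values are illustrative):

```python
def alphabeta(node, depth, alpha, beta, maximizing, children, value):
    # Minimax with alpha-beta pruning: stop exploring once alpha >= beta.
    kids = children(node)
    if depth == 0 or not kids:
        return value(node)
    if maximizing:
        v = float('-inf')
        for c in kids:
            v = max(v, alphabeta(c, depth - 1, alpha, beta, False, children, value))
            alpha = max(alpha, v)
            if alpha >= beta:
                break  # beta cutoff: the MIN ancestor will avoid this branch
        return v
    else:
        v = float('inf')
        for c in kids:
            v = min(v, alphabeta(c, depth - 1, alpha, beta, True, children, value))
            beta = min(beta, v)
            if alpha >= beta:
                break  # alpha cutoff
        return v

tree = {'root': ['L', 'R'], 'L': ['L1', 'L2'], 'R': ['R1', 'R2']}
vals = {'L1': 3, 'L2': 5, 'R1': 2, 'R2': 9}
best = alphabeta('root', 10, float('-inf'), float('inf'), True,
                 lambda n: tree.get(n, []), lambda n: vals[n])
print(best)  # 3; the R2 leaf is pruned after R1 shows R is worth at most 2
```

Expectimax replaces the MIN step with an expectation over chance nodes, at which point pruning generally no longer applies.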
Applications:
You should be able to show that you have a sense of what the
problems and solutions are, but you will not be responsible for the
technical details of the last few lectures.
You should be able to give reasonable answers about how to
appropriately apply techniques we've covered to applications we did
not discuss.
Changes to the practice final solutions
Answers to the following questions have been updated since copies were printed for the review session.