Natural language processing (NLP) is a crucial part of artificial intelligence (AI), modeling how people share information. In recent years, deep learning approaches have obtained very high performance on many NLP tasks. In this course, students gain a thorough introduction to cutting-edge neural networks for NLP.

Content

What is this course about?

Natural language processing (NLP) or computational linguistics is one of the most important technologies of the information age. Applications of NLP are everywhere because people communicate almost everything in language: web search, advertising, emails, customer service, language translation, virtual agents, medical reports, politics, etc. In the last decade, deep learning (or neural network) approaches have obtained very high performance across many different NLP tasks, using single end-to-end neural models that do not require traditional, task-specific feature engineering. In this course, students will gain a thorough introduction to cutting-edge research in Deep Learning for NLP. Through lectures, assignments and a final project, students will learn the necessary skills to design, implement, and understand their own neural network models, using the PyTorch framework.

“Take it. CS221 taught me algorithms. CS229 taught me math. CS224N taught me how to write machine learning models.” – A CS224N student on Carta

Previous offerings

Below you can find archived websites and student project reports from previous years.

Prerequisites

  • Proficiency in Python

    All class assignments will be in Python (using NumPy and PyTorch). If you need to remind yourself of Python, or you're not very familiar with NumPy, you can come to the Python review session in week 1 (listed in the schedule). If you have a lot of programming experience but in a different language (e.g. C/C++/Matlab/Java/Javascript), you will probably be fine.

  • College Calculus, Linear Algebra (e.g. MATH 51, CME 100)

    You should be comfortable taking (multivariable) derivatives and understanding matrix/vector notation and operations.

  • Basic Probability and Statistics (e.g. CS 109 or equivalent)

    You should know the basics of probability, Gaussian distributions, means, standard deviations, etc.

  • Foundations of Machine Learning (e.g. CS221, CS229, CS230, or CS124)

    We will be formulating cost functions, taking derivatives and performing optimization with gradient descent. If you already have basic machine learning and/or deep learning knowledge, the course will be easier; however, it is possible to take CS224n without it. There are many introductions to ML, in webpage, book, and video form. One approachable introduction is Hal Daumé’s in-progress A Course in Machine Learning. Reading the first 5 chapters of that book would be good background. Knowing the first 7 chapters would be even better! For a concrete picture of the cost-function-and-gradient-descent workflow, see the short sketch after this list.
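
If gradient descent is new to you, here is a minimal sketch, in PyTorch, of the workflow that last bullet describes: define a cost function, let autograd take the derivatives, and update parameters by gradient descent. The toy data, model, and learning rate are invented purely for illustration.

    import torch

    # Toy linear regression: recover a weight vector from noisy data.
    torch.manual_seed(0)
    X = torch.randn(100, 3)                  # 100 examples, 3 features
    true_w = torch.tensor([2.0, -1.0, 0.5])  # "ground truth" weights
    y = X @ true_w + 0.1 * torch.randn(100)  # noisy targets

    w = torch.zeros(3, requires_grad=True)   # parameters to learn
    lr = 0.1                                 # learning rate (illustrative)
    for step in range(200):
        loss = ((X @ w - y) ** 2).mean()     # cost function: mean squared error
        loss.backward()                      # autograd computes d(loss)/dw
        with torch.no_grad():
            w -= lr * w.grad                 # gradient descent update
            w.grad.zero_()                   # clear gradients for the next step

    print(w)                                 # should be close to true_w

The class assignments build on this same loop, typically with torch.nn modules and built-in optimizers in place of the hand-written update.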

Reference Texts

The following texts are useful, but none are required. All of them can be read free online.

  • Dan Jurafsky and James H. Martin. Speech and Language Processing (3rd ed. draft)
  • Jacob Eisenstein. Natural Language Processing
  • Yoav Goldberg. A Primer on Neural Network Models for Natural Language Processing
  • Ian Goodfellow, Yoshua Bengio, and Aaron Courville. Deep Learning
  • Delip Rao and Brian McMahan. Natural Language Processing with PyTorch (requires Stanford login).

If you have no background in neural networks but would like to take the course anyway, you may find one of these books helpful for additional background:

  • Michael A. Nielsen. Neural Networks and Deep Learning
  • Eugene Charniak. Introduction to Deep Learning


Coursework

Assignments (54%)

There are five weekly assignments, which will improve both your theoretical understanding and your practical skills. All assignments contain both written questions and programming parts. In office hours, TAs may look at students’ code for assignments 1, 2 and 3 but not for assignments 4 and 5.

  • Credit:
    • Assignment 1 (6%): Introduction to word vectors
    • Assignment 2 (12%): Derivatives and implementation of word2vec algorithm
    • Assignment 3 (12%): Dependency parsing and neural network foundations
    • Assignment 4 (12%): Neural Machine Translation with sequence-to-sequence, attention, and subwords
    • Assignment 5 (12%): Self-supervised learning and fine-tuning with Transformers
  • Deadlines: All assignments are due on either a Tuesday or a Thursday before class (i.e. before 3:15pm). All deadlines are listed in the schedule.
  • Submission: Assignments are submitted via Gradescope. You will be able to access the course Gradescope page on Canvas. If you need to sign up for a Gradescope account, please use your @stanford.edu email address. Further instructions are given in each assignment handout. Do not email us your assignments.
  • Late start: If it results in a higher grade for you, we will not use your assignment 1 score; instead, we will compute your assignment grade by counting each of assignments 2–5 at 13.5% (4 × 13.5% = 54%, the same total weight as all five assignments).
  • Collaboration: Study groups are allowed, but students must understand and complete their own assignments, and hand in one assignment per student. If you worked in a group, please put the names of the members of your study group at the top of your assignment. Please ask if you have any questions about the collaboration policy.
  • Honor Code: We expect students to not look at solutions or implementations online. Like all other classes at Stanford, we take the student Honor Code seriously. We sometimes use automated methods to detect overly similar assignment solutions.

Final Project (43%)

The Final Project offers you the chance to apply your newly acquired skills towards an in-depth application. Students have two options: the Default Final Project (in which students tackle a predefined task, namely textual Question Answering) or a Custom Final Project (in which students choose their own project involving human language and deep learning). Examples of both can be seen on last year's website.

Important information

  • Credit: For both default and custom projects, credit for the final project is broken down as follows:
    • Project proposal (5%) [instructions]
    • Project milestone (5%) [instructions]
    • Project poster (3%)
    • Project report (30%) [instructions]
  • Deadlines: The project proposal, milestone and report are all due at 3:15pm. All deadlines are listed in the schedule.
  • Default Final Project [handout (IID SQuAD track)] [handout (Robust QA track)] [lecture slides]: In this project, students explore deep learning solutions to the SQuAD (Stanford Question Answering Dataset) challenge. This year's project is similar to last year's, on SQuAD 2.0 with baseline code in PyTorch.
  • Project advice [lecture slides] [custom project tips]: The Practical Tips for Final Projects lecture provides guidance for choosing and planning your project. To get project advice from staff members, first look at each staff member's areas of expertise on the office hours page. This should help you find a staff member who is knowledgeable about your project area.

Practicalities

  • Team size: Students may do final projects solo, or in teams of up to 3 people. We strongly recommend you do the final project in a team. Larger teams are expected to do correspondingly larger projects, and you should only form a 3-person team if you are planning to do an ambitious project where every team member will have a significant contribution.
  • Contribution: In the final report we ask for a statement of what each team member contributed to the project. Team members will typically get the same grade, but we may differentiate in extreme cases of unequal contribution. You can contact us in confidence in the event of unequal contribution.
  • External collaborators: You can work on a project that has external (non CS224n student) collaborators, but you must make it clear in your final report which parts of the project were your work.
  • Sharing projects: You can share a single project between CS224n and another class, but we expect the project to be accordingly bigger, and you must declare that you are sharing the project in your project proposal.
  • Mentors: Every custom project team has a mentor, who gives feedback and advice during the project. Default project teams do not have mentors. A project may have an external (i.e., not course staff) mentor; otherwise, we will assign a CS224n staff mentor to custom project teams after project proposals.
  • Computing resources: All teams will receive credits to use the cloud computing service Azure, thanks to a kind donation by Microsoft!
  • Using external resources: The following guidelines apply to all projects (though the default project has some more specific rules, details TBA):
    • You can use any deep learning framework you like (PyTorch, TensorFlow, etc.)
    • More generally, you may use any existing code, libraries, etc. and consult any papers, books, online references, etc. for your project. However, you must cite your sources in your writeup and clearly indicate which parts of the project are your contribution and which parts were implemented by others.
    • Under no circumstances may you look at another CS224n group’s code, or incorporate their code into your project.

Participation (3%)

We appreciate everyone being actively involved in the class! There are several ways of earning participation credit, which is capped at 3%:

  • Attending guest speakers' lectures:
    • In the second half of the class, we have three invited speakers. Our guest speakers make a significant effort to come lecture for us, so (both to show our appreciation and to continue attracting interesting speakers) we do not want them lecturing to a largely empty room. As such, we encourage students to attend these virtual lectures live and participate in Q&A.
    • All students get 0.5% per speaker (1.5% total) for either attending the guest lecture in person or writing a reaction paragraph if they watched the talk remotely; details will be provided. Students do not need to attend lecture live to write these reaction paragraphs; they may watch asynchronously.
  • Completing feedback surveys: We will send out two feedback surveys (mid-quarter and end-of-quarter) to help us understand how the course is going, and how we can improve. Each of the two surveys is worth 0.5%.
  • Ed participation: The top ~20 contributors to Ed will get 3%; others will get credit in proportion to the participation of the ~20th person.
  • Karma points: Any other act that improves the class, such as helping out another student in office hours, that a CS224n TA or instructor notices and deems worthy: 1%

Late Days

  • Each student has 6 late days to use. A late day extends the deadline 24 hours. You can use up to 3 late days per assignment (including all five assignments, project proposal, project milestone and project final report).
  • Teams can share late days between members. For example, a group of three people must have at least six late days between them to extend the deadline by two days. If any late days are being shared, this must be clearly marked at the beginning of the report, and the team must fill out the form linked in this Ed post.
  • Once you have used all 6 late days, the penalty is 1% off the final course grade for each additional late day.

Regrade Requests

If you feel you deserved a better grade on an assignment, you may submit a regrade request on Gradescope within 3 days after the grades are released. Your request should briefly summarize why you feel the original grade was unfair. Your TA will reevaluate your assignment as soon as possible, and then issue a decision. If you are still not happy, you can ask for your assignment to be regraded by an instructor.

Credit/No credit enrollment

If you take the class credit/no credit then you are graded in the same way as those registered for a letter grade. The only difference is that, provided you reach a C- standard in your work, it will simply be graded as CR.

All students welcome

We are committed to doing what we can to work for equity and to create an inclusive learning environment that actively values the diversity of backgrounds, identities, and experiences of everyone in CS224N. We also know that we will sometimes make missteps. If you notice some way that we could do better, we hope that you will let someone in the course staff know about it.

Well-Being and Mental Health

The last two years have been difficult for everyone. We're here for you, and we'll try to help you get through a couple more quarters of the pandemic. If you are experiencing personal, academic, or relationship problems and would like to talk to someone with training and experience, reach out to Counseling and Psychological Services (CAPS) on campus. CAPS is the university's counseling center dedicated to student mental health and wellbeing. Phone assessment appointments can be made at CAPS by calling 650-723-3785, or by accessing the VadenPatient portal through the Vaden website.

Auditing the course

In general we are happy to have auditors if they are a member of the Stanford community (registered student, official visitor, staff, or faculty). If you want to actually master the material of the class, we very strongly recommend that auditors do all the assignments. However, due to high enrollment, we cannot grade the work of any students who are not officially enrolled in the class.

Students with Documented Disabilities

We assume that all of us learn in different ways, and that the organization of the course must accommodate each student differently. We are committed to ensuring the full participation of all enrolled students in this class. If you need an academic accommodation based on a disability, you should initiate the request with the Office of Accessible Education (OAE). The OAE will evaluate the request, recommend accommodations, and prepare a letter for faculty. Students should contact the OAE as soon as possible, and in any case well in advance of assignment deadlines, since timely notice is needed to coordinate accommodations. Students should also send their accommodation letter to the staff mailing list or make a private post on Ed as soon as possible.

Sexual violence

Academic accommodations are available for students who have experienced or are recovering from sexual violence. If you would like to talk to a confidential resource, you can schedule a meeting with the Confidential Support Team or call their 24/7 hotline at: 650-725-9955. Counseling and Psychological Services also offers confidential counseling services. Non-confidential resources include the Title IX Office, for investigation and accommodations, and the SARA Office, for healing programs. Students can also speak directly with the teaching staff to arrange accommodations. Note that university employees – including professors and TAs – are required to report what they know about incidents of sexual or relationship violence, stalking and sexual harassment to the Title IX Office. Students can learn more at https://vaden.stanford.edu/sexual-assault.


Schedule

Updated lecture slides will be posted here shortly before each lecture. Other links contain last year's slides, which are mostly similar.

Lecture notes will be uploaded a few days after most lectures. The notes (which cover approximately the first half of the course content) give supplementary detail beyond the lectures.

Date | Description | Course Materials | Events | Deadlines
Tue Jan 4 Word Vectors
[slides] [notes]

Gensim word vectors example:
[code] [preview] (a brief illustrative sketch follows this entry)

Suggested Readings:
  1. Efficient Estimation of Word Representations in Vector Space (original word2vec paper)
  2. Distributed Representations of Words and Phrases and their Compositionality (negative sampling paper)
Assignment 1 out
[code]
[preview]
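
As a taste of the Gensim word vectors example linked above, here is a minimal sketch using gensim's downloader API. The pretrained model name is an assumption (one of gensim-data's standard vector sets), not necessarily what the class notebook uses.

    import gensim.downloader as api

    # Download (on first use) and load pretrained vectors as a KeyedVectors object.
    # "glove-wiki-gigaword-100" is a standard gensim-data model name; the class
    # notebook may load a different set of vectors.
    wv = api.load("glove-wiki-gigaword-100")

    print(wv.most_similar("king", topn=3))   # nearest neighbors by cosine similarity
    print(wv.similarity("coffee", "tea"))    # similarity between two words
    # The classic analogy: king - man + woman lands near queen
    print(wv.most_similar(positive=["king", "woman"], negative=["man"], topn=1))
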
Thu Jan 6 Word Vectors 2 and Word Window Classification
[slides] [notes]
Suggested Readings:
  1. GloVe: Global Vectors for Word Representation (original GloVe paper)
  2. Improving Distributional Similarity with Lessons Learned from Word Embeddings
  3. Evaluation methods for unsupervised word embeddings
Additional Readings:
  1. A Latent Variable Model Approach to PMI-based Word Embeddings
  2. Linear Algebraic Structure of Word Senses, with Applications to Polysemy
  3. On the Dimensionality of Word Embedding
Fri Jan 7 Python Review Session
[slides] [notebook]
1:30pm - 2:30pm
Remote (link on Canvas)
Tue Jan 11 Backprop and Neural Networks
[slides] [notes]
Suggested Readings:
  1. matrix calculus notes
  2. Review of differential calculus
  3. CS231n notes on network architectures
  4. CS231n notes on backprop
  5. Derivatives, Backpropagation, and Vectorization
  6. Learning Representations by Backpropagating Errors (seminal Rumelhart et al. backpropagation paper)
Additional Readings:
  1. Yes you should understand backprop
  2. Natural Language Processing (Almost) from Scratch
Assignment 2 out
[code]
[handout]
[latex template]
Assignment 1 due
Thu Jan 13 Dependency Parsing
[slides] [notes]
[slides (annotated)]
Suggested Readings:
  1. Incrementality in Deterministic Dependency Parsing
  2. A Fast and Accurate Dependency Parser using Neural Networks
  3. Dependency Parsing
  4. Globally Normalized Transition-Based Neural Networks
  5. Universal Stanford Dependencies: A cross-linguistic typology
  6. Universal Dependencies website
  7. Jurafsky & Martin Chapter 14
Fri Jan 14 PyTorch Tutorial Session
[colab notebook]
1:30pm - 2:30pm
Remote (link on Canvas)
Tue Jan 18 Recurrent Neural Networks and Language Models
[slides] [notes (lectures 5 and 6)]
Suggested Readings:
  1. N-gram Language Models (textbook chapter)
  2. The Unreasonable Effectiveness of Recurrent Neural Networks (blog post overview)
  3. Sequence Modeling: Recurrent and Recursive Neural Nets (Sections 10.1 and 10.2)
  4. On Chomsky and the Two Cultures of Statistical Learning
Assignment 3 out
[code]
[handout]
[latex template]
Assignment 2 due
Thu Jan 20 Vanishing Gradients, Fancy RNNs, Seq2Seq
[slides] [notes (lectures 5 and 6)]
Suggested Readings:
  1. Sequence Modeling: Recurrent and Recursive Neural Nets (Sections 10.3, 10.5, 10.7-10.12)
  2. Learning long-term dependencies with gradient descent is difficult (one of the original vanishing gradient papers)
  3. On the difficulty of training Recurrent Neural Networks (proof of vanishing gradient problem)
  4. Vanishing Gradients Jupyter Notebook (demo for feedforward networks)
  5. Understanding LSTM Networks (blog post overview)
Tue Jan 25 Machine Translation, Attention, Subword Models
[slides] [notes]
Suggested Readings:
  1. Statistical Machine Translation slides, CS224n 2015 (lectures 2/3/4)
  2. Statistical Machine Translation (book by Philipp Koehn)
  3. BLEU (original paper)
  4. Sequence to Sequence Learning with Neural Networks (original seq2seq NMT paper)
  5. Sequence Transduction with Recurrent Neural Networks (early seq2seq speech recognition paper)
  6. Neural Machine Translation by Jointly Learning to Align and Translate (original seq2seq+attention paper)
  7. Attention and Augmented Recurrent Neural Networks (blog post overview)
  8. Massive Exploration of Neural Machine Translation Architectures (practical advice for hyperparameter choices)
  9. Achieving Open Vocabulary Neural Machine Translation with Hybrid Word-Character Models
  10. Revisiting Character-Based Neural Machine Translation with Capacity and Compression
Assignment 4 out
[code]
[handout]
[latex template]
[Azure Guide]
[Practical Guide to VMs]
Assignment 3 due
Thu Jan 27 Final Projects: Custom and Default; Practical Tips
[slides] [Custom project tips]
Suggested Readings:
  1. Practical Methodology (Deep Learning book chapter)
Project Proposal out
[instructions]

Default Final Project out
[handout (IID SQuAD track)]
[handout (Robust QA track)]

Tue Feb 1 Transformers (by Anna Goldie)
[slides]
Suggested Readings:
  1. Project Handout (IID SQuAD track)
  2. Project Handout (Robust QA track)
  3. Attention Is All You Need
  4. The Illustrated Transformer
  5. Transformer (Google AI blog post)
  6. Layer Normalization
  7. Image Transformer
  8. Music Transformer: Generating music with long-term structure
Thu Feb 3 More about Transformers and Pretraining (by Anna Goldie)
[slides]
Suggested Readings:
  1. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding
  2. Contextual Word Representations: A Contextual Introduction
  3. The Illustrated BERT, ELMo, and co.
  4. Jurafsky & Martin Chapter on Transfer Learning
Assignment 5 out
[code]
[handout]
[latex template]
Fri Feb 4 Hugging Face Transformers Tutorial Session 1:30pm - 2:30pm
Thornton 102 (will be recorded)
Colab
Tue Feb 8 Question Answering
[slides] [notes]
Suggested Readings:
  1. SQuAD: 100,000+ Questions for Machine Comprehension of Text
  2. Bidirectional Attention Flow for Machine Comprehension
  3. Reading Wikipedia to Answer Open-Domain Questions
  4. Latent Retrieval for Weakly Supervised Open Domain Question Answering
  5. Dense Passage Retrieval for Open-Domain Question Answering
  6. Learning Dense Representations of Phrases at Scale
Project Proposal due
Assignment 4 due
Thu Feb 10 Natural Language Generation
[slides]
Suggested readings:
  1. The Curious Case of Neural Text Degeneration
  2. Get To The Point: Summarization with Pointer-Generator Networks
  3. Hierarchical Neural Story Generation
  4. How NOT To Evaluate Your Dialogue System
Tue Feb 15 Integrating knowledge in language models
[slides]
Suggested readings:
  1. ERNIE: Enhanced Language Representation with Informative Entities
  2. Barack’s Wife Hillary: Using Knowledge Graphs for Fact-Aware Language Modeling
  3. Pretrained Encyclopedia: Weakly Supervised Knowledge-Pretrained Language Model
  4. Language Models as Knowledge Bases?
Project Milestone out
[Instructions]
Thu Feb 17 Bias, toxicity, and fairness
(by Maarten Sap)
Suggested readings:
  1. The Risk of Racial Bias in Hate Speech Detection
  2. Social Bias Frames
  3. PowerTransformer: Unsupervised Controllable Revision for Biased Language Correction
Assignment 5 due
Tue Feb 22 Retrieval Augmented Models + Knowledge
(by Kelvin Guu)
[slides]
Suggested readings:
  1. Locating and Editing Factual Knowledge in GPT
  2. LaMDA: Language Models for Dialog Applications
  3. REALM: Retrieval-Augmented Language Model Pre-Training
Thu Feb 24 ConvNets, Tree Recursive Neural Networks and Constituency Parsing
[slides]
Suggested readings:
  1. Convolutional Neural Networks for Sentence Classification
  2. Improving neural networks by preventing co-adaptation of feature detectors
  3. A Convolutional Neural Network for Modelling Sentences
  4. Parsing with Compositional Vector Grammars.
  5. Constituency Parsing with a Self-Attentive Encoder
Project Milestone due
Tue Mar 1 Scaling laws for large models
(by Jared Kaplan)
Suggested readings:
  1. Scaling Laws for Neural Language Models
Thu Mar 3 Coreference
[slides]
Suggested readings:
  1. Coreference Resolution Chapter from Jurafsky and Martin
  2. End-to-end Neural Coreference Resolution
Tue Mar 8 Editing Neural Networks
[slides]
Thu Mar 10 [Lecture Cancelled] Extra project office hours available during usual lecture time, see Ed.
Sun Mar 13 Project due [instructions]
Mon Mar 14 Poster Session
Note: Only open to the Stanford community and invited guests.
12:00pm - 4:30pm [More details]
Alumni Center McCaw Hall/Ford Gardens
[Printing guide]
