Section outline

  • Monday, May 19th (16:30-18:30)

    Question answering

    • Machine reading based on contextual embeddings
    • Start and end probabilities (see the first sketch after this list)
    • Candidate score and fine-tuning loss
    • Negative examples and sliding windows
    • Machine reading based on attention: the Stanford Attentive Reader
    • Bilinear product attention (see the second sketch after this list)
    • Practical issues
    • Research papers
    • Datasets and leaderboards
    • Answer sentence selection
    • Knowledge-based QA
    • Entity linking
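
    A minimal sketch of the span-extraction idea behind the "start and end
    probabilities" and "candidate score" bullets. PyTorch, the tensor names,
    the toy dimensions, and the random weights are illustrative assumptions,
    not the exact setup from the lecture.

      import torch
      import torch.nn.functional as F

      torch.manual_seed(0)
      seq_len, hidden = 12, 16                  # toy sizes (assumed)
      H = torch.randn(seq_len, hidden)          # contextual embeddings of the passage tokens
      w_start = torch.randn(hidden)             # learned "start" vector
      w_end = torch.randn(hidden)               # learned "end" vector

      p_start = F.softmax(H @ w_start, dim=0)   # P(token i starts the answer)
      p_end = F.softmax(H @ w_end, dim=0)       # P(token j ends the answer)

      # Candidate score of a span (i, j) with i <= j: product of its start and
      # end probabilities; the best-scoring span is the predicted answer.
      scores = torch.triu(p_start.unsqueeze(1) * p_end.unsqueeze(0))
      i, j = divmod(int(scores.argmax()), seq_len)
      print(f"best candidate span: tokens {i}-{j}, score {scores[i, j].item():.4f}")

      # Fine-tuning loss for a labelled example: cross-entropy of the gold
      # start and end positions (gold_start and gold_end are hypothetical here).
      gold_start, gold_end = 3, 5
      loss = -(torch.log(p_start[gold_start]) + torch.log(p_end[gold_end]))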
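
    A second minimal sketch, for bilinear product attention as in the Stanford
    Attentive Reader: the question vector attends over the passage tokens via a
    learned bilinear form. Again, PyTorch, the names, and the dimensions are
    illustrative assumptions.

      import torch
      import torch.nn.functional as F

      torch.manual_seed(0)
      seq_len, hidden = 12, 16                   # toy sizes (assumed)
      P = torch.randn(seq_len, hidden)           # passage token representations (e.g. BiLSTM outputs)
      q = torch.randn(hidden)                    # question representation
      W = torch.randn(hidden, hidden)            # learned bilinear weight matrix

      alpha = F.softmax(P @ (W.t() @ q), dim=0)  # alpha_i proportional to exp(q^T W p_i)
      o = alpha @ P                              # attended passage summary used to score answers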

    References

    • Eisenstein, section 17.5.2
    • Slides from the lecture

    Resources