Evan Hernandez (https://evandez.com/feed.xml, updated 2022-04-23)

Language Explanations of Neurons (2022-04-23): https://evandez.com/2022/04/23/proj-milan
<p>We present a procedure to automatically generate natural language descriptions
of neurons in computer vision models. These generated descriptions support
important interpretability applications: we use them to analyze neuron importance,
identify adversarial vulnerabilities, audit for unexpected features,
and edit out spurious correlations.</p>
Evan Hernandez

MIT Summer Research Program (2021-06-01): https://evandez.com/2021/06/01/teach-msrp
<p>I had the pleasure of mentoring an MSRP summer intern on a research project. She developed a language-based image editing tool for images generated by GANs.</p>
Evan Hernandez

Low-Dimensional Probing (2021-01-01): https://evandez.com/2021/01/01/proj-low-dim-probes
<p>How do word representations geometrically encode linguistic abstractions like part of speech? We find that many linguistic features are encoded in <b>low-dimensional subspaces</b> of contextual word representation spaces, and these subspaces can causally influence model predictions.</p>
Evan Hernandez

Visual Concept Vocabulary for GANs (2021-01-01): https://evandez.com/2021/01/01/proj-visual-vocab
<p>GANs sometimes encode visual concepts in their latent space as <b>linear directions</b>.
We construct a <b>visual concept vocabulary</b> for pretrained GANs, consisting of latent directions
and free-form language descriptions of the changes they induce. We then distill the vocabulary into simpler,
one-word visual concepts (e.g., <i>snow</i> or <i>clouds</i>).</p>
Sarah Schwettmann

6.864: Advanced Natural Language Processing (2021-01-01): https://evandez.com/2021/01/01/teach-advanced-nlp
<p>MIT’s primary NLP course, typically taken after a first course in ML. I wrote homework assignments, planned recitations, and led weekly office hours.</p>
Evan Hernandez

Undergraduate Learning Center (2018-01-01): https://evandez.com/2018/01/01/teach-ulc
<p>For three years, I tutored underrepresented students in UW-Madison engineering programs in introductory computer science and math courses. I also developed tutoring software to support both the by-request and drop-in tutoring services.</p>
Evan Hernandez