<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9" xmlns:image="http://www.google.com/schemas/sitemap-image/1.1" xmlns:xhtml="http://www.w3.org/1999/xhtml">
  <url>
    <loc>http://anilananthaswamy.com/why-machines-learn-codebook-blog</loc>
    <lastmod>2025-04-24</lastmod>
    <changefreq>daily</changefreq>
    <priority>0.75</priority>
  </url>
  <url>
    <loc>http://anilananthaswamy.com/why-machines-learn-codebook-blog/the-centrality-of-bayes-theorem</loc>
    <lastmod>2025-04-10</lastmod>
    <changefreq>monthly</changefreq>
    <priority>0.5</priority>
    <image:image>
      <image:loc>https://images.squarespace-cdn.com/content/v1/55311f53e4b05d44413d2e7e/b86a19f5-4009-4d28-889d-4ff14f2e40e0/Book+Small.jpeg</image:loc>
      <image:title>WHY MACHINES LEARN CODEBOOK BLOG - The Centrality of Bayes’s Theorem for Machine Learning - WHY MACHINES LEARN</image:title>
      <image:caption>“A masterpiece”— Geoffrey Hinton</image:caption>
    </image:image>
  </url>
  <url>
    <loc>http://anilananthaswamy.com/why-machines-learn-codebook-blog/credit-assignment</loc>
    <lastmod>2025-03-28</lastmod>
    <changefreq>monthly</changefreq>
    <priority>0.5</priority>
    <image:image>
      <image:loc>https://images.squarespace-cdn.com/content/v1/55311f53e4b05d44413d2e7e/b86a19f5-4009-4d28-889d-4ff14f2e40e0/Book+Small.jpeg</image:loc>
      <image:title>WHY MACHINES LEARN CODEBOOK BLOG - Whose Fault Is it Anyway? The Problem of Credit Assignment - WHY MACHINES LEARN</image:title>
      <image:caption>“A masterpiece”—Geoff Hinton</image:caption>
    </image:image>
  </url>
  <url>
    <loc>http://anilananthaswamy.com/why-machines-learn-codebook-blog/the-long-arc-of-ml-history</loc>
    <lastmod>2025-03-27</lastmod>
    <changefreq>monthly</changefreq>
    <priority>0.5</priority>
    <image:image>
      <image:loc>https://images.squarespace-cdn.com/content/v1/55311f53e4b05d44413d2e7e/7b4c9d8d-5a17-49fe-bfcd-90f3e6abdfad/image_2025-03-27_150003660.png</image:loc>
      <image:title>WHY MACHINES LEARN CODEBOOK BLOG - Machine Learning’s Arc Of History - JOHN HOPFIELD</image:title>
      <image:caption>Reluctant to talk at first, Hopfield was a delightful interviewee. We talked about his turn from physics to neural networks and his design of Hopfield Networks, for which he won the 2024 Nobel Prize in Physics.</image:caption>
    </image:image>
    <image:image>
      <image:loc>https://images.squarespace-cdn.com/content/v1/55311f53e4b05d44413d2e7e/833ce830-8d33-4fd7-8c16-0ca86a064c26/image_2025-03-27_152404630.png</image:loc>
      <image:title>WHY MACHINES LEARN CODEBOOK BLOG - Machine Learning’s Arc Of History - YANN LeCUN</image:title>
      <image:caption>LeCun was Hinton’s postdoc and went on to found his own lab. He designed the first convolutional neural networks and much more; again, a seminal figure in the history of modern deep neural networks, and a Turing Award winner along with Hinton and Yoshua Bengio.</image:caption>
    </image:image>
    <image:image>
      <image:loc>https://images.squarespace-cdn.com/content/v1/55311f53e4b05d44413d2e7e/fe392029-9743-451a-a723-472b984fec6b/image_2025-03-27_152942333.png</image:loc>
      <image:title>WHY MACHINES LEARN CODEBOOK BLOG - Machine Learning’s Arc Of History - ALETHEA POWER</image:title>
      <image:caption>Power’s team at OpenAI stumbled upon and analyzed the phenomenon of grokking: a neural network trained way past the point of interpolation ends up discovering a simpler solution; it ‘groks’, or truly understands, the solution.</image:caption>
    </image:image>
    <image:image>
      <image:loc>https://images.squarespace-cdn.com/content/v1/55311f53e4b05d44413d2e7e/f386fdaa-681f-4865-bca5-de524b1af429/image_2025-03-27_151245139.png</image:loc>
      <image:title>WHY MACHINES LEARN CODEBOOK BLOG - Machine Learning’s Arc Of History - GEOFFREY HINTON</image:title>
      <image:caption>Hinton is the glue that connects the first AI winter, when no one was working on neural networks except a few like him, to today, when everyone is working on deep nets. Backpropagation, AlexNet and much else. Of course, Nobel Laureate along with Hopfield.</image:caption>
    </image:image>
    <image:image>
      <image:loc>https://images.squarespace-cdn.com/content/v1/55311f53e4b05d44413d2e7e/3c95ff77-8d84-4e72-8bb4-a38d80773269/image_2025-03-27_144947814.png</image:loc>
      <image:title>WHY MACHINES LEARN CODEBOOK BLOG - Machine Learning’s Arc Of History - PETER HART</image:title>
      <image:caption>Hart developed the rigorous math behind the Cover-Hart K-Nearest Neighbor rule, one of the seminal machine learning algorithms. It was his PhD thesis, with his advisor Thomas Cover, who was barely a few years older.</image:caption>
    </image:image>
    <image:image>
      <image:loc>https://images.squarespace-cdn.com/content/v1/55311f53e4b05d44413d2e7e/b86a19f5-4009-4d28-889d-4ff14f2e40e0/Book+Small.jpeg</image:loc>
      <image:title>WHY MACHINES LEARN CODEBOOK BLOG - Machine Learning’s Arc Of History</image:title>
    </image:image>
    <image:image>
      <image:loc>https://images.squarespace-cdn.com/content/v1/55311f53e4b05d44413d2e7e/6e4232fd-454c-44bf-be9d-702dd06824a9/image_2025-03-27_145701019.png</image:loc>
      <image:title>WHY MACHINES LEARN CODEBOOK BLOG - Machine Learning’s Arc Of History - ISABELLE GUYON</image:title>
      <image:caption>Guyon was the brains behind support vector machines (SVMs), with Vladimir Vapnik and Bernhard Boser. She did not get enough credit during the 1990s for SVMs, but the community would eventually recognize her contributions</image:caption>
    </image:image>
    <image:image>
      <image:loc>https://images.squarespace-cdn.com/content/v1/55311f53e4b05d44413d2e7e/6876cd9d-aae3-4c1a-9d31-8e1d9bb47680/image_2025-03-27_153448014.png</image:loc>
      <image:title>WHY MACHINES LEARN CODEBOOK BLOG - Machine Learning’s Arc Of History - MISHA BELKIN</image:title>
      <image:caption>Belkin, a mathematician and deep learning theorist, has been trying to make sense of why deep neural networks generalize despite being over-parameterized, and has analyzed in particular the so-called double descent behavior of deep nets</image:caption>
    </image:image>
    <image:image>
      <image:loc>https://images.squarespace-cdn.com/content/v1/55311f53e4b05d44413d2e7e/f7027338-a6a5-4582-a88e-e5a8af522614/image_2025-03-27_151848057.png</image:loc>
      <image:title>WHY MACHINES LEARN CODEBOOK BLOG - Machine Learning’s Arc Of History - GEORGE CYBENKO</image:title>
      <image:caption>Cybenko is credited for proving the first universal approximation theorem: a neural network with a single hidden layer, given enough neurons, can approximate any function. This and theorems that followed made people believe in neural networks</image:caption>
    </image:image>
    <image:image>
      <image:loc>https://images.squarespace-cdn.com/content/v1/55311f53e4b05d44413d2e7e/d2e83603-c128-4c87-9886-e2fb40161a62/image_2025-03-27_143232437.png</image:loc>
      <image:title>WHY MACHINES LEARN CODEBOOK BLOG - Machine Learning’s Arc Of History - BERNIE WIDROW</image:title>
      <image:caption>Widrow, along with Ted Hoff, developed the LMS algorithm, an extremely noisy algebraic formulation of stochastic gradient descent, which he used to train his single neurons, called ADALINE, for adaptive linear neuron</image:caption>
    </image:image>
  </url>
  <url>
    <loc>http://anilananthaswamy.com/why-machines-learn-codebook-blog/the-role-of-vectors-in-ml</loc>
    <lastmod>2025-03-16</lastmod>
    <changefreq>monthly</changefreq>
    <priority>0.5</priority>
    <image:image>
      <image:loc>https://images.squarespace-cdn.com/content/v1/55311f53e4b05d44413d2e7e/b86a19f5-4009-4d28-889d-4ff14f2e40e0/Book+Small.jpeg</image:loc>
      <image:title>WHY MACHINES LEARN CODEBOOK BLOG - The Role of Vectors in Machine Learning</image:title>
      <image:caption>“A masterpiece” - Geoff Hinton</image:caption>
    </image:image>
  </url>
  <url>
    <loc>http://anilananthaswamy.com/why-machines-learn-codebook-blog/the-many-ways-to-grok-ml</loc>
    <lastmod>2025-03-19</lastmod>
    <changefreq>monthly</changefreq>
    <priority>0.5</priority>
    <image:image>
      <image:loc>https://images.squarespace-cdn.com/content/v1/55311f53e4b05d44413d2e7e/1742085860903-G5WWLAYEWD67KEKUEVPZ/Book+Small.jpeg</image:loc>
      <image:title>WHY MACHINES LEARN CODEBOOK BLOG - The Many Ways To Grok Machine Learning</image:title>
      <image:caption>“Your own version of deep learning—with deep pleasure and insight along the way.” — Steven Strogatz, New York Times bestselling author of Infinite Powers and professor of mathematics at Cornell University</image:caption>
    </image:image>
  </url>
  <url>
    <loc>http://anilananthaswamy.com/why-machines-learn-codebook-blog/theoretical-minimum</loc>
    <lastmod>2025-03-25</lastmod>
    <changefreq>monthly</changefreq>
    <priority>0.5</priority>
    <image:image>
      <image:loc>https://images.squarespace-cdn.com/content/v1/55311f53e4b05d44413d2e7e/b86a19f5-4009-4d28-889d-4ff14f2e40e0/Book+Small.jpeg</image:loc>
      <image:title>WHY MACHINES LEARN CODEBOOK BLOG - The Theoretical Minimum (for Machine Learning)…And Why - “a masterpiece” — Geoff Hinton</image:title>
      <image:caption>“masterful work”— Melanie Mitchell</image:caption>
    </image:image>
  </url>
  <url>
    <loc>http://anilananthaswamy.com/why-machines-learn-codebook-blog/conceptual-simplicity-of-ml</loc>
    <lastmod>2024-12-09</lastmod>
    <changefreq>monthly</changefreq>
    <priority>0.5</priority>
  </url>
  <url>
    <loc>http://anilananthaswamy.com/why-machines-learn-codebook-blog/hopfield-networks</loc>
    <lastmod>2024-10-10</lastmod>
    <changefreq>monthly</changefreq>
    <priority>0.5</priority>
    <image:image>
      <image:loc>https://images.squarespace-cdn.com/content/v1/55311f53e4b05d44413d2e7e/ea5557f2-2ffc-405c-aa12-ca27fc9efc85/Fig6.png</image:loc>
      <image:title>WHY MACHINES LEARN CODEBOOK BLOG - LLM Prompts for Learning About Hopfield Networks - Prompt 7: Make the Gaussian noise stronger</image:title>
    </image:image>
    <image:image>
      <image:loc>https://images.squarespace-cdn.com/content/v1/55311f53e4b05d44413d2e7e/59fae5e1-3dec-4237-8413-61c915621f94/Fig5.png</image:loc>
      <image:title>WHY MACHINES LEARN CODEBOOK BLOG - LLM Prompts for Learning About Hopfield Networks - The code worked! You can see the stored image (above left), the corrupted image in the center, and the retrieved image (above right). I now wanted the corrupted image to have more noise.</image:title>
    </image:image>
    <image:image>
      <image:loc>https://images.squarespace-cdn.com/content/v1/55311f53e4b05d44413d2e7e/36524b72-05c4-4b67-936f-cf57f60f28fc/Fig7.png</image:loc>
      <image:title>WHY MACHINES LEARN CODEBOOK BLOG - LLM Prompts for Learning About Hopfield Networks - The output of running the code inside the Jupyter Notebook was almost the same as what’s shown on the left. There were some changes. The Process New Digit button was above the images. Also, the code in the Jupyter Notebook generated two rows of such images, only one of which was updated when you pressed the Process New Digit button. I didn’t debug it further.</image:title>
    </image:image>
    <image:image>
      <image:loc>https://images.squarespace-cdn.com/content/v1/55311f53e4b05d44413d2e7e/b95dd3a5-c949-43d4-b6d0-690459748195/Fig2.png</image:loc>
      <image:title>WHY MACHINES LEARN CODEBOOK BLOG - LLM Prompts for Learning About Hopfield Networks - Prompt 2: These images have associated labels. Could you also provide a way to select an appropriate digit, such as 8 or 5, and plot only that?</image:title>
    </image:image>
    <image:image>
      <image:loc>https://images.squarespace-cdn.com/content/v1/55311f53e4b05d44413d2e7e/8bdbfdc5-8ad8-48c4-b943-ba212e086359/Fig8.png</image:loc>
      <image:title>WHY MACHINES LEARN CODEBOOK BLOG - LLM Prompts for Learning About Hopfield Networks - Sometimes you will see results like the one shown above. I changed the amount of Gaussian noise that was added to the image to corrupt it (tweaked the mean and standard deviation from (0, 1.5) to (2, 3.5)). The Hopfield network recovered a bit-flipped image: black became white and vice-versa. Can you figure out why? Think about energy minimums. For more detail, please have a look at Chapter 8 of WHY MACHINES LEARN.</image:title>
    </image:image>
    <image:image>
      <image:loc>https://images.squarespace-cdn.com/content/v1/55311f53e4b05d44413d2e7e/265ccc4a-62cb-4fb1-88fa-a03ea25f6505/Fig4.png</image:loc>
      <image:title>WHY MACHINES LEARN CODEBOOK BLOG - LLM Prompts for Learning About Hopfield Networks - Prompt 4: Okay, now, create function to map the binarized images into images where 0 is -1 and 1 is 1.</image:title>
      <image:caption>Note: It’s worth looking at Claude’s response, because it “gets” the reason for the prompt!</image:caption>
    </image:image>
    <image:image>
      <image:loc>https://images.squarespace-cdn.com/content/v1/55311f53e4b05d44413d2e7e/bb56a0d7-2a1b-499a-adfc-d9fafe31ebb0/Fig3.png</image:loc>
      <image:title>WHY MACHINES LEARN CODEBOOK BLOG - LLM Prompts for Learning About Hopfield Networks - Prompt 3: You are selecting different random indices for the original and binarized images. Make sure that the random indices are the same.</image:title>
    </image:image>
    <image:image>
      <image:loc>https://images.squarespace-cdn.com/content/v1/55311f53e4b05d44413d2e7e/11a7b368-790f-4eeb-8583-73d3721b9985/Fig1.png</image:loc>
      <image:title>WHY MACHINES LEARN CODEBOOK BLOG - LLM Prompts for Learning About Hopfield Networks - Prompt 1: Please write code to load the MNIST dataset and turn each image in that dataset into an image where each pixel is either 0 or 1, depending on whether the grayscale value in the original image is less than or equal to 120 or greater than 120, respectively</image:title>
    </image:image>
  </url>
  <url>
    <loc>http://anilananthaswamy.com/why-machines-learn-codebook-blog/monty-hall</loc>
    <lastmod>2024-09-28</lastmod>
    <changefreq>monthly</changefreq>
    <priority>0.5</priority>
    <image:image>
      <image:loc>https://images.squarespace-cdn.com/content/v1/55311f53e4b05d44413d2e7e/aedd03d4-fc38-458e-8272-731ebd038359/Image.png</image:loc>
      <image:title>WHY MACHINES LEARN CODEBOOK BLOG - The Monty Hall Problem: Could an LLM have convinced Paul Erdős? - Prompt</image:title>
      <image:caption>Can you write python code that simulates the game? It's a Monte Carlo simulation. Calculate the probability that the contestant wins when they choose to switch and when they choose not to switch, by running 10,000 trials. Then plot the odds of winning for either case (Y-axis) against the number of trials (X-axis). Output: The graph produced by running the generated code is shown here.</image:caption>
    </image:image>
    <image:image>
      <image:loc>https://images.squarespace-cdn.com/content/v1/55311f53e4b05d44413d2e7e/400044fd-df25-43a3-9e4f-2b6c32e1a8f2/monty_hall_animation.gif</image:loc>
      <image:title>WHY MACHINES LEARN CODEBOOK BLOG - The Monty Hall Problem: Could an LLM have convinced Paul Erdős?</image:title>
      <image:caption>Animation of how probabilities converge to 1/3 and 2/3 over 10,000 trials of the Monty Hall Game</image:caption>
    </image:image>
  </url>
  <url>
    <loc>http://anilananthaswamy.com/why-machines-learn-codebook-blog/from-rosenblatt-to-claude</loc>
    <lastmod>2024-12-26</lastmod>
    <changefreq>monthly</changefreq>
    <priority>0.5</priority>
    <image:image>
      <image:loc>https://images.squarespace-cdn.com/content/v1/55311f53e4b05d44413d2e7e/c9ecdbbc-f0ac-4de1-948e-173508846233/perceptron_convergence.gif</image:loc>
      <image:title>WHY MACHINES LEARN CODEBOOK BLOG - From Rosenblatt to Claude - This is the final GIF generated by the Claude-generated code. The code allows you to select your data points, and it then uses the perceptron algorithm to find a line that separates the circles from the triangles.</image:title>
      <image:caption>This is the final GIF generated by the Claude-generated code. The code allows you to select your data points, and it then uses the perceptron algorithm to find a line that separates the circles from the triangles.</image:caption>
    </image:image>
    <image:image>
      <image:loc>https://images.squarespace-cdn.com/content/v1/55311f53e4b05d44413d2e7e/33c95b8c-35a0-4cdc-b6f7-aa54b3302fa6/First+Image.png</image:loc>
      <image:title>WHY MACHINES LEARN CODEBOOK BLOG - From Rosenblatt to Claude - Output of Claude’s code</image:title>
      <image:caption>Claude generated code that worked without any errors. I was able to interact with the UI and select 10 data points, 5 for circles and 5 for triangles. But you can see that the plot doesn’t look exactly like what I asked for. So, I prompted it a little more, to create code that could generate a plot with solid lines for the axes, no bounding box, etc.</image:caption>
    </image:image>
    <image:image>
      <image:loc>https://images.squarespace-cdn.com/content/v1/55311f53e4b05d44413d2e7e/d2064d90-ed65-4a8c-8fbe-f030f6b80d24/Fourth+Image.png</image:loc>
      <image:title>WHY MACHINES LEARN CODEBOOK BLOG - From Rosenblatt to Claude - Output of Claude’s code</image:title>
      <image:caption>This is the output after a couple of iterations of simple prompting. Okay, close enough. Ideally, I should have asked Claude to give the circles and triangles a gray “fill”, but I can now work with this. So, I gave Claude a new prompt. Prompt: Great. Now, once the user has finished clicking 10 times and generating the circles and triangles, when the user clicks next, use that input to kick off a perceptron algorithm, to find a straight line that separates the circles from the triangles. Once the perceptron finds the line, please draw the line.</image:caption>
    </image:image>
    <image:image>
      <image:loc>https://images.squarespace-cdn.com/content/v1/55311f53e4b05d44413d2e7e/77fcf416-c91e-4e09-a45f-536b67bdaa42/Fifth+Image.png</image:loc>
      <image:title>WHY MACHINES LEARN CODEBOOK BLOG - From Rosenblatt to Claude - Output of Claude’s code</image:title>
      <image:caption>Okay. This was a big change. The code that Claude generated had significantly more functionality than the previous version, which simply allowed me to select the data points. This time, it actually implemented a perceptron algorithm and plotted the linearly separating hyperplane. Next, I wanted to visualize the training as an animation, with the output showing some of the incorrect hyperplanes before ending with the correct one. Getting this to work took some prompting. Below is the series of prompts that got it to work (I show only the important prompts; I haven’t included the simpler ones, to do with the look-and-feel of the UI, which aren’t that important).</image:caption>
    </image:image>
    <image:image>
      <image:loc>https://images.squarespace-cdn.com/content/v1/55311f53e4b05d44413d2e7e/1258ddf9-9cba-407e-a4e5-6bb5eb57e0ad/10-8+2D+Classification+Solution.jpg</image:loc>
      <image:title>WHY MACHINES LEARN CODEBOOK BLOG - From Rosenblatt to Claude - Prompt</image:title>
      <image:caption>Please look at the image provided. Can you write code that does the following: Provide a matplotlib interactive user interface that allows the user to click on a 2D graph. The first 5 clicks should be used for circles, the second 5 clicks should be used for triangles.</image:caption>
    </image:image>
  </url>
  <url>
    <loc>http://anilananthaswamy.com/talks-bedford</loc>
    <lastmod>2024-05-11</lastmod>
    <changefreq>daily</changefreq>
    <priority>0.75</priority>
    <image:image>
      <image:loc>https://images.squarespace-cdn.com/content/v1/52a74d9ae4b0253945d2aee9/1400275370693-L0ZTMZY7L6SCYQG6GO85/xzuvhgdQGul0amA3Qc7a_373A9681.jpg</image:loc>
      <image:title>Talks</image:title>
    </image:image>
  </url>
  <url>
    <loc>http://anilananthaswamy.com/talks-bedford/2014/1/23/get-out-there-anapm</loc>
    <lastmod>2024-05-12</lastmod>
    <changefreq>monthly</changefreq>
    <priority>0.5</priority>
  </url>
  <url>
    <loc>http://anilananthaswamy.com/talks-bedford/2014/1/23/why-deserts-matter-too-5pnzc</loc>
    <lastmod>2024-05-12</lastmod>
    <changefreq>monthly</changefreq>
    <priority>0.5</priority>
    <image:image>
      <image:loc>https://images.squarespace-cdn.com/content/v1/52a74d9ae4b0253945d2aee9/1390513173585-DWV4B9Z0B3GY1421D88O/tumblr_mjdttrQLOJ1rkz363o1_1280.jpg</image:loc>
      <image:title>Talks - Why Deserts Matter Too</image:title>
    </image:image>
  </url>
  <url>
    <loc>http://anilananthaswamy.com/talks-bedford/2014/1/23/appalachia-zjmle</loc>
    <lastmod>2024-05-12</lastmod>
    <changefreq>monthly</changefreq>
    <priority>0.5</priority>
    <image:image>
      <image:loc>https://images.squarespace-cdn.com/content/v1/52a74d9ae4b0253945d2aee9/1390512900953-CBP39T42JYJB4AT50XTE/tumblr_mjs7w6zIHV1rkz363o3_1280.jpg</image:loc>
      <image:title>Talks - Success Story – New Hope in Old Appalachia</image:title>
    </image:image>
    <image:image>
      <image:loc>https://images.squarespace-cdn.com/content/v1/52a74d9ae4b0253945d2aee9/1390512900611-EF33OSHTK3F9FL7P3XQO/tumblr_mjs7w6zIHV1rkz363o5_1280.jpg</image:loc>
      <image:title>Talks - Success Story – New Hope in Old Appalachia</image:title>
    </image:image>
  </url>
  <url>
    <loc>http://anilananthaswamy.com/talks</loc>
    <lastmod>2024-05-14</lastmod>
    <changefreq>daily</changefreq>
    <priority>0.75</priority>
  </url>
  <url>
    <loc>http://anilananthaswamy.com/the-man-who-wasnt-there</loc>
    <lastmod>2024-05-13</lastmod>
    <changefreq>daily</changefreq>
    <priority>0.75</priority>
    <image:image>
      <image:loc>https://images.squarespace-cdn.com/content/v1/55311f53e4b05d44413d2e7e/1442818118522-UC0Q1Q41QYJI4B0HGX4F/google+talk+snapshot.jpg</image:loc>
      <image:title>The Man Who Wasn't There</image:title>
    </image:image>
    <image:image>
      <image:loc>https://images.squarespace-cdn.com/content/v1/55311f53e4b05d44413d2e7e/1442818345406-EE1NBHWZOJAQXLUUXH4M/TMWWT+BW.png</image:loc>
      <image:title>The Man Who Wasn't There</image:title>
    </image:image>
    <image:image>
      <image:loc>https://images.squarespace-cdn.com/content/v1/55311f53e4b05d44413d2e7e/1471770524447-4780U0OUCX9MIBWAVBKL/9781101984321.jpg</image:loc>
    </image:image>
    <image:image>
      <image:loc>https://images.squarespace-cdn.com/content/v1/55311f53e4b05d44413d2e7e/1440700281936-SWPXETL2G4ZW8T00SR62/TheManWhoWasntThere-galley-final_page1_image8.jpg</image:loc>
      <image:title>The Man Who Wasn't There</image:title>
    </image:image>
  </url>
  <url>
    <loc>http://anilananthaswamy.com/the-edge-of-physics</loc>
    <lastmod>2024-05-13</lastmod>
    <changefreq>daily</changefreq>
    <priority>0.75</priority>
    <image:image>
      <image:loc>https://images.squarespace-cdn.com/content/v1/55311f53e4b05d44413d2e7e/1429503548574-C5SOM9EPRHXDGIUQ8NUP/edgeofphysics_cover_HR.jpg</image:loc>
    </image:image>
    <image:image>
      <image:loc>https://images.squarespace-cdn.com/content/v1/55311f53e4b05d44413d2e7e/1429501064979-X1GX2KUGHF40QNP92Y1E/DSC_0242_alt.jpg</image:loc>
      <image:title>The Edge of Physics</image:title>
    </image:image>
  </url>
  <url>
    <loc>http://anilananthaswamy.com/home</loc>
    <lastmod>2024-08-13</lastmod>
    <changefreq>daily</changefreq>
    <priority>1.0</priority>
    <image:image>
      <image:loc>https://images.squarespace-cdn.com/content/v1/55311f53e4b05d44413d2e7e/7a31c19f-ee6a-42c5-a014-d57184e4dcb6/WhatsApp+Image+2024-03-30+at+23.13.01.jpg</image:loc>
    </image:image>
    <image:image>
      <image:loc>https://images.squarespace-cdn.com/content/v1/55311f53e4b05d44413d2e7e/1707199546908-L68TS3S9LBOR3R9K22ZF/all+covers.png</image:loc>
      <image:title>Anil Ananthaswamy</image:title>
    </image:image>
  </url>
  <url>
    <loc>http://anilananthaswamy.com/through-two-doors-at-once</loc>
    <lastmod>2024-05-13</lastmod>
    <changefreq>daily</changefreq>
    <priority>0.75</priority>
    <image:image>
      <image:loc>https://images.squarespace-cdn.com/content/v1/55311f53e4b05d44413d2e7e/421eba7a-fe1d-446a-9298-b9109f6ba465/20180806_161939.jpg</image:loc>
    </image:image>
    <image:image>
      <image:loc>https://images.squarespace-cdn.com/content/v1/55311f53e4b05d44413d2e7e/1542468674673-6ACRPR9SAM4BNTLFPMR7/Double_slit_x-ray_simulation_polychromatic_false-color.jpg</image:loc>
      <image:title>THROUGH TWO DOORS AT ONCE</image:title>
    </image:image>
  </url>
  <url>
    <loc>http://anilananthaswamy.com/why-machines-learn</loc>
    <lastmod>2024-09-29</lastmod>
    <changefreq>daily</changefreq>
    <priority>0.75</priority>
    <image:image>
      <image:loc>https://images.squarespace-cdn.com/content/v1/55311f53e4b05d44413d2e7e/1707196351977-ZVSTCOV9C3AQGX13Z0BY/WhyMachinesLearn_Lifestyle_4.png</image:loc>
    </image:image>
    <image:image>
      <image:loc>https://images.squarespace-cdn.com/content/v1/55311f53e4b05d44413d2e7e/1707196151262-C7PHAH1JTAW7U1R4U9VN/banner+covers.jpg</image:loc>
      <image:title>Why Machines Learn</image:title>
    </image:image>
  </url>
  <url>
    <loc>http://anilananthaswamy.com/new-page-2</loc>
    <lastmod>2024-05-11</lastmod>
    <changefreq>daily</changefreq>
    <priority>0.75</priority>
    <image:image>
      <image:loc>https://images.squarespace-cdn.com/content/v1/55311f53e4b05d44413d2e7e/6077349d-03d0-41dc-b00a-c1ad808bf74a/image_2024-05-11_153855142.png</image:loc>
      <image:title>Magazine Articles</image:title>
      <image:caption>Image: Irene Pérez for Quanta Magazine</image:caption>
    </image:image>
    <image:image>
      <image:loc>https://images.squarespace-cdn.com/content/v1/55311f53e4b05d44413d2e7e/85b44e9b-357b-479d-8b75-54b717427b53/LLM_Emergence-byMyriamWares-Lede-scaled.jpeg</image:loc>
      <image:title>Magazine Articles</image:title>
      <image:caption>Image: Myriam Wares for Quanta Magazine</image:caption>
    </image:image>
  </url>
  <url>
    <loc>http://anilananthaswamy.com/landing-bedford</loc>
    <lastmod>2024-05-11</lastmod>
    <changefreq>daily</changefreq>
    <priority>0.75</priority>
    <image:image>
      <image:loc>https://images.squarespace-cdn.com/content/v1/55311f53e4b05d44413d2e7e/71487b71-9c23-47bf-9a93-a13a493548a5/TED2022_20220413_2GT2674.jpg</image:loc>
      <image:title>Home - about</image:title>
      <image:caption>Anil Ananthaswamy is an award-winning science writer and former staff writer and deputy news editor for the London-based New Scientist magazine. He is a 2019-20 MIT Knight Science Journalism fellow. He has been a guest editor for the science writing program at the University of California, Santa Cruz, and organizes and teaches an annual science writing workshop at the National Centre for Biological Sciences in Bengaluru, India. He is a freelance feature editor for PNAS Front Matter. He writes regularly for New Scientist, Quanta, Scientific American, PNAS Front Matter and Nature, and has contributed to Nautilus, Matter, The Wall Street Journal, Discover and the UK’s Literary Review, among others. His first book, The Edge of Physics, was voted book of the year in 2010 by UK’s Physics World, and his second book, The Man Who Wasn’t There, was long-listed for the 2016 PEN/E. O. Wilson Literary Science Writing Award. His most recent book, Through Two Doors at Once, was named one of Smithsonian's Favorite Books of 2018 and one of Forbes's 2018 Best Books About Astronomy, Physics and Mathematics.</image:caption>
    </image:image>
    <image:image>
      <image:loc>https://images.squarespace-cdn.com/content/v1/55311f53e4b05d44413d2e7e/3b630d51-b264-4e3e-98cb-9806a3b61be1/WhatsApp+Image+2024-03-30+at+23.13.01.jpg</image:loc>
      <image:title>Home - education</image:title>
      <image:caption>Anil trained as an electronics and computer engineer at the Indian Institute of Technology, Madras (BSEE) and the University of Washington, Seattle (MSEE), and was working as a distributed systems software architect before switching to writing. Of late, he’s rediscovered his passion for engineering, and has retrained in aspects of machine learning and deep neural networks, during his stay at MIT and via eCornell, Cornell University’s online program.</image:caption>
    </image:image>
    <image:image>
      <image:loc>https://images.squarespace-cdn.com/content/v1/55311f53e4b05d44413d2e7e/1715448748411-GNJ5V9QGSD4A20S1IU65/banner+covers.jpg</image:loc>
      <image:title>Home</image:title>
    </image:image>
  </url>
  <url>
    <loc>http://anilananthaswamy.com/what-we-do-bedford-1</loc>
    <changefreq>daily</changefreq>
    <priority>0.75</priority>
    <lastmod>2024-05-11</lastmod>
    <image:image>
      <image:loc>https://images.squarespace-cdn.com/content/v1/52a74d9ae4b0253945d2aee9/1390513380961-PTHFXE5U2S1FJSLPXUSD/tumblr_mh1iruZWLf1rkz363o1_1280.jpg</image:loc>
      <image:title>What We Do</image:title>
    </image:image>
  </url>
  <url>
    <loc>http://anilananthaswamy.com/new-page-3</loc>
    <changefreq>daily</changefreq>
    <priority>0.75</priority>
    <lastmod>2024-05-12</lastmod>
    <image:image>
      <image:loc>https://images.squarespace-cdn.com/content/v1/55311f53e4b05d44413d2e7e/1715451391244-8NVRKOX1FYLRJ1S1X2CV/GrokkingNNS-byIrenePerez-Lede-scaled.jpeg</image:loc>
      <image:title>New Page - quanta magazine / machine learning</image:title>
      <image:caption>How Do Machines ‘Grok’ Data? By apparently overtraining them, researchers have seen neural networks discover novel solutions to problems. Image: Irene Pérez for Quanta Magazine</image:caption>
    </image:image>
  </url>
  <url>
    <loc>http://anilananthaswamy.com/work</loc>
    <changefreq>daily</changefreq>
    <priority>0.75</priority>
    <lastmod>2024-05-12</lastmod>
  </url>
  <url>
    <loc>http://anilananthaswamy.com/articles</loc>
    <changefreq>daily</changefreq>
    <priority>0.75</priority>
    <lastmod>2025-03-24</lastmod>
    <image:image>
      <image:loc>https://images.squarespace-cdn.com/content/v1/55311f53e4b05d44413d2e7e/a8be2d21-8f62-4e6f-9091-bd889487c7d4/DifussionModels-bySutterstockSamuelVelasco-Lede-scaled.jpeg</image:loc>
    </image:image>
    <image:image>
      <image:loc>https://images.squarespace-cdn.com/content/v1/55311f53e4b05d44413d2e7e/49a21e74-64a5-4f39-96ea-39e54a579063/Quantum-Observers_2880_Lede.jpg</image:loc>
    </image:image>
    <image:image>
      <image:loc>https://images.squarespace-cdn.com/content/v1/55311f53e4b05d44413d2e7e/f1e2939b-bee8-4364-8dbd-17e326dd91f9/Screenshot+2025-01-04+at+1.20.01%E2%80%AFPM.png</image:loc>
    </image:image>
    <image:image>
      <image:loc>https://images.squarespace-cdn.com/content/v1/55311f53e4b05d44413d2e7e/e9fff19a-b317-415a-9182-48364a2ae179/Screenshot+2025-01-04+at+1.11.35%E2%80%AFPM.png</image:loc>
    </image:image>
    <image:image>
      <image:loc>https://images.squarespace-cdn.com/content/v1/55311f53e4b05d44413d2e7e/1715451398302-XNWEZ6I5KMMZJK1Y1IJX/LLM_Emergence-byMyriamWares-Lede-scaled.jpeg</image:loc>
    </image:image>
    <image:image>
      <image:loc>https://images.squarespace-cdn.com/content/v1/55311f53e4b05d44413d2e7e/1715451404833-FKGM7RMQ7Y9I0GOM2XLE/HyperDimensionalComputing-byMyriamWares-Lede-scaled.jpeg</image:loc>
    </image:image>
    <image:image>
      <image:loc>https://images.squarespace-cdn.com/content/v1/55311f53e4b05d44413d2e7e/873aad03-4f7e-4ac8-ab4c-270dc7808041/Screenshot+2025-01-04+at+1.15.47%E2%80%AFPM.png</image:loc>
    </image:image>
    <image:image>
      <image:loc>https://images.squarespace-cdn.com/content/v1/55311f53e4b05d44413d2e7e/19b8251d-d25e-4a7c-9b04-b220c896d28a/Meta_Net_2880x1620_Lede.jpg</image:loc>
    </image:image>
    <image:image>
      <image:loc>https://images.squarespace-cdn.com/content/v1/55311f53e4b05d44413d2e7e/508cfb3b-0b01-4c51-934a-11e002935230/Predicitve-Coding_2K_Lede.jpg</image:loc>
    </image:image>
    <image:image>
      <image:loc>https://images.squarespace-cdn.com/content/v1/55311f53e4b05d44413d2e7e/daa5a14f-b2f2-4128-b37d-a09d86d1cb11/image_2024-08-09_120824048.png</image:loc>
    </image:image>
    <image:image>
      <image:loc>https://images.squarespace-cdn.com/content/v1/55311f53e4b05d44413d2e7e/b4e1e6e2-e782-4e8d-800f-0ef20e4bbb33/Screenshot+2025-01-04+at+12.46.27%E2%80%AFPM.png</image:loc>
    </image:image>
    <image:image>
      <image:loc>https://images.squarespace-cdn.com/content/v1/55311f53e4b05d44413d2e7e/7d163444-f055-4e1c-8973-cc135d680248/Collapsing+Sheets+of+Spacetime.jpeg</image:loc>
    </image:image>
    <image:image>
      <image:loc>https://images.squarespace-cdn.com/content/v1/55311f53e4b05d44413d2e7e/1715421667674-116FMZ7O4NJMI4KCPFL0/ANNs+clues+to+how+brains+learn.png</image:loc>
    </image:image>
    <image:image>
      <image:loc>https://images.squarespace-cdn.com/content/v1/55311f53e4b05d44413d2e7e/d5e89c19-8b36-46b4-9eb9-97fddcdfe053/d41586-023-00641-w_24084982.jpeg</image:loc>
    </image:image>
    <image:image>
      <image:loc>https://images.squarespace-cdn.com/content/v1/55311f53e4b05d44413d2e7e/1715451391244-8NVRKOX1FYLRJ1S1X2CV/GrokkingNNS-byIrenePerez-Lede-scaled.jpeg</image:loc>
    </image:image>
    <image:image>
      <image:loc>https://images.squarespace-cdn.com/content/v1/55311f53e4b05d44413d2e7e/9945bf12-5426-4667-bc8c-562ee2ebdafa/BrainDeepNets-2880x1620-Lede.jpg</image:loc>
    </image:image>
    <image:image>
      <image:loc>https://images.squarespace-cdn.com/content/v1/55311f53e4b05d44413d2e7e/7ac700c8-4340-473a-b5eb-8f2b4aa40baf/Bowl_2880_Lede.gif</image:loc>
    </image:image>
    <image:image>
      <image:loc>https://images.squarespace-cdn.com/content/v1/55311f53e4b05d44413d2e7e/2842f1ff-b1bd-4952-a8c2-d1bbe4907a52/Self_learning_AI_2880x1620_Lede-scaled.jpg</image:loc>
    </image:image>
    <image:image>
      <image:loc>https://images.squarespace-cdn.com/content/v1/55311f53e4b05d44413d2e7e/a31f4e94-fcf5-4379-8a27-2a8c41c47332/SEI_161137300.jpeg</image:loc>
    </image:image>
    <image:image>
      <image:loc>https://images.squarespace-cdn.com/content/v1/55311f53e4b05d44413d2e7e/06bab8bb-2149-4288-8411-8d555b949f67/d41586-023-01938-6_25458082.jpeg</image:loc>
    </image:image>
    <image:image>
      <image:loc>https://images.squarespace-cdn.com/content/v1/55311f53e4b05d44413d2e7e/f18b3661-c882-493d-9cd2-4f735b511f66/Screenshot+2025-01-04+at+12.48.05%E2%80%AFPM.png</image:loc>
    </image:image>
  </url>
  <url>
    <loc>http://anilananthaswamy.com/teaching</loc>
    <changefreq>daily</changefreq>
    <priority>0.75</priority>
    <lastmod>2024-05-13</lastmod>
    <image:image>
      <image:loc>https://images.squarespace-cdn.com/content/v1/55311f53e4b05d44413d2e7e/a78befa8-eab3-4f06-9ec4-4607b0ee7a4f/Teleporter.jpg</image:loc>
    </image:image>
    <image:image>
      <image:loc>https://images.squarespace-cdn.com/content/v1/55311f53e4b05d44413d2e7e/5229e85a-3498-4f9b-ac09-62906839078d/IMG_0015.JPG</image:loc>
    </image:image>
    <image:image>
      <image:loc>https://images.squarespace-cdn.com/content/v1/55311f53e4b05d44413d2e7e/6767d910-e3bf-452e-94fb-0924a054b5bd/20180726_154650.cropped.jpg</image:loc>
    </image:image>
    <image:image>
      <image:loc>https://images.squarespace-cdn.com/content/v1/55311f53e4b05d44413d2e7e/693a8c10-57d5-4788-a1a3-9d715e37ca83/DSC_0018+%281%29.jpg</image:loc>
    </image:image>
    <image:image>
      <image:loc>https://images.squarespace-cdn.com/content/v1/55311f53e4b05d44413d2e7e/d2e2fdf2-635d-4354-9cad-6e2945609e51/P1030916.JPG</image:loc>
    </image:image>
    <image:image>
      <image:loc>https://images.squarespace-cdn.com/content/v1/55311f53e4b05d44413d2e7e/88eeb3a6-a0c6-44c6-9c22-c7c9b707b020/IMG_0654.JPG</image:loc>
    </image:image>
    <image:image>
      <image:loc>https://images.squarespace-cdn.com/content/v1/55311f53e4b05d44413d2e7e/e05274e0-7141-4776-b3c1-2b96fe45db01/20170814_181125.jpg</image:loc>
    </image:image>
    <image:image>
      <image:loc>https://images.squarespace-cdn.com/content/v1/55311f53e4b05d44413d2e7e/1c4c8ae8-57ea-402d-a358-9826a2c4cb01/20230224_114034.jpg</image:loc>
    </image:image>
    <image:image>
      <image:loc>https://images.squarespace-cdn.com/content/v1/55311f53e4b05d44413d2e7e/833085fd-5b63-4882-b1ea-4844142faf6f/20150613_154818.cropped.jpg</image:loc>
    </image:image>
    <image:image>
      <image:loc>https://images.squarespace-cdn.com/content/v1/55311f53e4b05d44413d2e7e/cfcdd998-5273-4f31-9486-90e07e0079bb/20190720_113005.jpg</image:loc>
    </image:image>
    <image:image>
      <image:loc>https://images.squarespace-cdn.com/content/v1/55311f53e4b05d44413d2e7e/e506cec5-332f-48c0-a548-174fd4443488/WP_20131011_001.jpg</image:loc>
    </image:image>
  </url>
  <url>
    <loc>http://anilananthaswamy.com/whymachineslearn-errata</loc>
    <changefreq>daily</changefreq>
    <priority>0.75</priority>
    <lastmod>2024-10-21</lastmod>
    <image:image>
      <image:loc>https://images.squarespace-cdn.com/content/v1/55311f53e4b05d44413d2e7e/41ceda0b-9fdd-425f-8a62-5321f00d5947/Sigmoid.png</image:loc>
    </image:image>
    <image:image>
      <image:loc>https://images.squarespace-cdn.com/content/v1/55311f53e4b05d44413d2e7e/fe4276aa-2c1b-402f-8b2b-dbb352a345e4/WML+On+Wood.jpeg</image:loc>
    </image:image>
  </url>
  <url>
    <loc>http://anilananthaswamy.com/general-2</loc>
    <changefreq>daily</changefreq>
    <priority>0.75</priority>
    <lastmod>2025-08-31</lastmod>
    <image:image>
      <image:loc>https://images.squarespace-cdn.com/content/v1/5ec321c2af33de48734cc929/1618497259178-6XJGK9GR6YAVBQL5L519/20140301_Trade-151_012-2.jpg</image:loc>
      <image:title>General  2</image:title>
    </image:image>
    <image:image>
      <image:loc>https://images.squarespace-cdn.com/content/v1/5ec321c2af33de48734cc929/1607694583486-2PQT0LQ193RL7MCB6DX4/20140228_Trade+151_0046.jpg</image:loc>
      <image:title>General  2</image:title>
    </image:image>
    <image:image>
      <image:loc>https://images.squarespace-cdn.com/content/v1/5ec321c2af33de48734cc929/1607694644871-IC85FNH781UNZSZEGHDR/Aro+Ha_0428.jpg</image:loc>
      <image:title>General  2</image:title>
    </image:image>
  </url>
</urlset>