Quotes

Reinforcement Learning

  1. David Silver: AlphaGo, AlphaZero, and Deep Reinforcement Learning | AI Podcast #86 with Lex Fridman

    1. It just taught us, yet again, that you have to have faith in your systems once they exceed your own level of ability (judgment). You have to trust them to know better than you, the designer, once you've instilled in them the ability to judge better than you can. Trust the system to do so.

    2. What do you think is the reward function (meaning) of human life? There are many levels at which you can understand a system as optimizing for a goal. A goal of the universe is to maximize entropy (by the second law of thermodynamics). Maybe evolution is something the universe discovered in order to dissipate energy as efficiently as possible. It's natural that, in order to achieve that high-level goal, individual organisms discover brains and intelligence, which enable them to support the goals of evolution. It's also the reason why we build AI.

AI Interpretation / Understanding

  1. Stephen Wolfram: Cellular Automata, Computation, and Physics | AI Podcast #89 with Lex Fridman

    1. How can we communicate with aliens? AI is our first sort of example of an alien intelligence. If you opened up a neural net and looked at what it is thinking, could you discuss things with it? It's not easy, but it's not absolutely impossible.

    2. There really isn't a bright line between the intelligent and the merely computational.

    3. Computation is as robust a notion as energy. Different types of computation are equivalent: even systems built from simple rules can be computationally equivalent to a universal Turing machine. (The Principle of Computational Equivalence, PCE.)

    4. He (Richard Feynman) was really, really, really good at calculating stuff, but he thought that was easy because he was so good at it. He thought the really impressive thing was to have this simple intuition about how everything works, so he would invent that at the end. And because he'd done the calculation and knew how it worked, it was a lot easier to have good intuition once you know what the answer is.

    5. Even randomness can be an emergent phenomenon (see the Rule 30 sketch after this list).

    6. I'm one of these people who, at this point, if somebody tells me something and I just don't understand it, my conclusion isn't that I'm dumb; my conclusion is that there's something wrong with what I'm being told.

    7. For 300 years, the mathematical-equations approach was the winner; it was the thing to use if you wanted a really good model of something. In the last decade or so, I think one can see a transition to using not mathematical equations but programs as the raw material for making models of stuff. (a paradigm shift)

    8. It is one of these cases where it's easier to do the whole thing than it is to do some piece of it (e.g., expert systems, NLP).

    9. About Wolfram Language

      1. Interesting code examples in Mathematica

      2. 2019 One-Liner competition

      3. Newest version (12.1, 2020.03) of Wolfram Language

      4. We May Have a Path to the Fundamental Theory of Physics
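
To make quotes 3 and 5 above concrete (the Principle of Computational Equivalence and randomness as an emergent phenomenon), here is a minimal sketch of Rule 30. It is written in Python rather than the Wolfram Language and is not from the podcast; the update rule is a single expression, yet the center column grown from one black cell looks statistically random.

```python
# Illustrative sketch (not from the podcast): Rule 30, an elementary cellular
# automaton. The update rule fits in one line, yet the center column of the
# pattern grown from a single black cell behaves like a random bit stream,
# which is the kind of "simple rule, complex behavior" Wolfram describes.

def rule30_step(cells):
    """Apply one step of Rule 30 to a row of 0/1 cells (edges padded with 0)."""
    padded = [0] + cells + [0]
    # New cell = left XOR (center OR right): this single expression is Rule 30.
    return [padded[i - 1] ^ (padded[i] | padded[i + 1])
            for i in range(1, len(padded) - 1)]

def run_rule30(steps=32):
    width = 2 * steps + 1
    row = [0] * width
    row[steps] = 1                      # single black cell in the middle
    center = []
    for _ in range(steps):
        print("".join("#" if c else "." for c in row))
        center.append(row[steps])       # collect the center column
        row = rule30_step(row)
    return center

if __name__ == "__main__":
    bits = run_rule30()
    # The left edge of the printed triangle is regular, but this column of bits
    # shows no obvious pattern: randomness emerging from a deterministic rule.
    print("center column:", "".join(map(str, bits)))
```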

Math

  1. John Conway

    1. You know, people think mathematics is complicated. Mathematics is the simple bit. It's the stuff we can understand. It's cats that are complicated. I mean, what is it in those little molecules and stuff that makes one cat behave differently than another, or that makes a cat? And how do you define a cat? I have no idea.