This page indexes, summarizes, and further discusses ideas brought up on LessWrong.
- /action: defined here
- /applause light: a statement that serves mainly to invite agreement or applause rather than to convey content
- /Bayesian statistics: statistical inference based on updating degrees of belief via Bayes' theorem
- /conditional probability function: defined here
- /expected utility: defined here (see the formula after this list)
- /friendly AI: an AI whose goals are aligned with human values (cf /unfriendly AI)
- /instrumental value (cf /terminal value): explained here
- /Omega: a fictional character often used in thought experiments
- /outcome: defined here
- /terminal value (cf /instrumental value): explained here
- /unfriendly AI: an AI whose goals are not aligned with human values (cf /friendly AI)
- /utility function: defined here
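As a rough orientation, several of the decision-theoretic entries above fit together in one formula. This is a standard textbook formulation, not a definition taken from LW itself: the expected utility of an action is the probability-weighted average of the utilities of its possible outcomes.

```latex
% Standard decision-theoretic sketch (not specific to LW):
% a is an action, O the set of possible outcomes,
% P(o | a) the conditional probability of outcome o given action a,
% and U the utility function over outcomes.
\mathrm{EU}(a) = \sum_{o \in \mathcal{O}} P(o \mid a)\, U(o)
```

On this reading, the conditional probability function supplies the weights, the utility function scores the outcomes, and an agent choosing the action with the highest expected utility is maximizing this sum.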
Many of the key ideas under discussion at LW are first put forth as a "sequence": a series of blog posts. These are often presented as fiction, and consequently reading them in full can be rather time-consuming if all you really want is to understand the essential ideas they set forth.
The central ideas behind LW sequences may be summarized here.
- /Three Worlds Collide: a parable