alex peysakhovich 🤖

@alex_peys

Followers
5,337
Following
789
Media
139
Statuses
1,191
@alex_peys
alex peysakhovich 🤖
10 months
really love when authors do things like explain equations in-line with colors etc... just makes papers so much easier to read.
Tweet media one
40
338
4K
@alex_peys
alex peysakhovich 🤖
10 months
once did a screening interview with a “famous ml hedgefund”. got a leetcode-style problem to find connected components in a graph. they wanted bfs or dfs or whatever. i took the svd of the laplacian and counted the number of 0s. they didn’t pass me because “you can’t use numpy”
32
68
2K
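The interview answer above, as a minimal numpy sketch (function name is mine; it counts components as the null-space dimension of the graph Laplacian, i.e. the number of ~zero singular values):

```python
import numpy as np

def n_components_via_svd(adj, tol=1e-8):
    """Number of connected components = dimension of the null space
    of the graph Laplacian L = D - A, i.e. the count of ~zero
    singular values of L."""
    adj = np.asarray(adj, dtype=float)
    laplacian = np.diag(adj.sum(axis=1)) - adj
    singular_values = np.linalg.svd(laplacian, compute_uv=False)
    return int(np.sum(singular_values < tol))

# two components: a triangle {0, 1, 2} and an edge {3, 4}
A = np.zeros((5, 5))
for i, j in [(0, 1), (1, 2), (0, 2), (3, 4)]:
    A[i, j] = A[j, i] = 1
```

BFS/DFS is O(V+E) while a dense SVD is O(V³), which is presumably the interviewer's objection, but it does count components correctly.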
@alex_peys
alex peysakhovich 🤖
1 year
standard ml: oh no my model is memorizing the training set, better add some regularization to make that not happen
llm ml: ugh it’s hallucinating, why can’t it just memorize some of the training set
10
33
336
@alex_peys
alex peysakhovich 🤖
9 months
living the dream of the gpu upper middle class
Tweet media one
19
8
307
@alex_peys
alex peysakhovich 🤖
5 months
facebook didn't keep it secret, here is the paper that explains the early comment ranking system for exactly this issue
@paulnovosad
Paul Novosad
5 months
Engineers have figured out how to cut back on the toxicity of the internet — but firms who are good at it keep it secret as a competitive advantage. A fascinating question of private vs. public interests.
Tweet media one
9
10
152
4
37
296
@alex_peys
alex peysakhovich 🤖
10 months
@Apoorva__Lal you can also estimate null space dimension by “uniformly” randomly sampling vectors, hitting them with the matrix, and seeing what % are 0. no numpy required
2
1
202
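Over the reals a uniformly random vector almost never lands in a proper null space, so one way to make the sampling idea concrete is over GF(2), where the hit fraction is informative (this is my interpretation of the trick, not the tweet author's code):

```python
import numpy as np

def estimate_nullity_gf2(A, n_samples=4000, seed=0):
    """Estimate the null-space dimension of a 0/1 matrix over GF(2):
    the fraction of random vectors x with A @ x == 0 (mod 2) is
    2**(nullity - n), so nullity = n + log2(fraction)."""
    rng = np.random.default_rng(seed)
    n = A.shape[1]
    x = rng.integers(0, 2, size=(n_samples, n))
    frac = np.all((x @ A.T) % 2 == 0, axis=1).mean()
    return int(round(n + np.log2(frac)))

# rank 2 over GF(2) (third column = sum of the first two) -> nullity 1
A = np.array([[1, 0, 1],
              [0, 1, 1]])
```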
@alex_peys
alex peysakhovich 🤖
7 months
ran gpt4 128k context on the "1 useful document + K distractors" task from our "attention sorting" paper, seems like very long context (more than 32k) doesn't work that well. 32k is still extremely impressive though! also claude2 clearly has some nice trick behind the scenes
Tweet media one
9
27
203
@alex_peys
alex peysakhovich 🤖
11 months
i worked at fb for 9 years, this is the first time i have ever seen people excited about an fb original product at launch time
7
7
192
@alex_peys
alex peysakhovich 🤖
4 months
the foundation of machine learning is that inside every big matrix lives a much smaller matrix and you should just use that
@krishnanrohit
rohit
4 months
Text embeddings are the unsung heroes of making ai work. Can't believe we found a way to make language into numbers, the matrix stuff backprop etc afterwards feels almost straightforward once it's done. All tricks are downstream of this. With numbers you can add them (merge),…
8
13
117
7
16
193
@alex_peys
alex peysakhovich 🤖
8 months
@infoxiao this is literally the recipe for llms
Tweet media one
3
21
174
@alex_peys
alex peysakhovich 🤖
2 months
this is just proof that agi is achieved, we can now simulate a real software engineer perfectly
@a_karvonen
Adam Karvonen
2 months
Interesting watch. In an official Devin demo, Devin spent six hours writing buggy code and fixing its buggy code when it could have just run the two commands in the repo's README.
5
19
280
1
11
158
@alex_peys
alex peysakhovich 🤖
1 year
there is a huge gain to be made by every company on the planet by just running every file in their codebase through gpt4 with the prompt "what's wrong with this?"
@DimitrisPapail
Dimitris Papailiopoulos
1 year
GPT-4 "discovered" the same sorting algorithm as AlphaDev by removing "mov S P". No RL needed. Can I publish this on nature? here are the prompts I used (excuse my idiotic typos, but gpt4 doesn't mind anyways)
96
449
3K
3
10
151
@alex_peys
alex peysakhovich 🤖
11 months
one of the reasons i left behavioral science (much happier now anyway, so it all worked out!) was that it was pretty clear many people were bs-ing and when getting jobs/tenure/etc... depends on # papers published, it is very hard to compete with people willing to make stuff up
7
8
133
@alex_peys
alex peysakhovich 🤖
8 months
lots of us are too busy working on this stuff to get in these debates on twitter, but yann is completely right here (just like he was with the cake thing, and the neural net thing, and and and)
@ylecun
Yann LeCun
8 months
The heretofore silent majority of AI scientists and engineers who
- do not believe in AI extinction scenarios or
- believe we have agency in making AI powerful, reliable, and safe and
- think the best way to do so is through open source AI platforms
NEED TO SPEAK UP !
170
408
2K
4
4
127
@alex_peys
alex peysakhovich 🤖
10 months
@ben_golub sometimes those are too complex to navigate
1
0
123
@alex_peys
alex peysakhovich 🤖
8 months
tfw one of your paper techniques gets rediscovered. now i know how everyone that wrote papers in machine learning between 1980 and 2012 feels
1
4
111
@alex_peys
alex peysakhovich 🤖
1 year
evergreen question: how does anyone ever do any data cleaning in python? pandas is the worst thing i've ever worked with compared to e.g. tidyverse
18
2
96
@alex_peys
alex peysakhovich 🤖
10 months
@tszzl i find these salaries surprisingly low given the value provided and the working conditions. if you scale cardiac surgeon to 40 hours per week that's ~400k, that's like an L6 at google/fb....
8
1
88
@alex_peys
alex peysakhovich 🤖
1 year
hype cycle for llms is moving so fast and everyone is at totally different points on the curve, it's wild
Tweet media one
5
22
86
@alex_peys
alex peysakhovich 🤖
1 year
ok, it's after 5pm, i am no longer a facebook employee, i will miss FAIR and my awesome colleagues, but it's on to new things! also, now everyone can stop asking me to help their weird uncle get unbanned, you know what they did.
14
0
87
@alex_peys
alex peysakhovich 🤖
10 months
the best conversations i've had about "ai risk" have been with integrity people from social media companies, they have the most experience with having complex systems behave in unintended ways and also deal with internal pressures eg. incentives of other teams to get metrics up
5
9
85
@alex_peys
alex peysakhovich 🤖
8 months
authorship norms differ a lot across fields...
cs: "oh, we talked about this at lunch for 5 min, you should be a coauthor!"
econ: "you spent weeks in a library collecting data and you want to be a coauthor? gtfo!"
...you can guess which one creates more collegial atmosphere
@SimonBowmaker
Simon Bowmaker
8 months
Daron Acemoğlu and David Card on undergraduate vs. graduate research assistants:
Tweet media one
Tweet media two
43
306
3K
3
3
79
@alex_peys
alex peysakhovich 🤖
11 months
me: oh what have you been up to today?
gf: not much, just chilling *casually drops 4 nature/science papers in one day*
@AnnieFranco
Annie Franco
11 months
New in Science and Nature: The first four papers from the U.S. 2020 Facebook and Instagram Election Study!
Tweet media one
Tweet media two
Tweet media three
Tweet media four
5
40
172
0
2
79
@alex_peys
alex peysakhovich 🤖
9 months
linear algebra is the basis for everything that works
3
6
80
@alex_peys
alex peysakhovich 🤖
1 year
the work on fine tuning llms in parameter efficient ways that is coming out is just so cool and clever. really shows the power of open source + flexible frameworks that allow you to easily write model blocks and just type loss.backward()
0
19
80
@alex_peys
alex peysakhovich 🤖
10 months
i stared at this for probably 90 seconds thinking “what else could it be other than epsilon?” had to check replies for the answer
@VisualAlgebra
Matt Macauley
10 months
The look my wife gave me when I immediately shouted “epsilon”, without really thinking it through…
Tweet media one
225
565
25K
4
3
77
@alex_peys
alex peysakhovich 🤖
2 months
its crazy how data inefficient neural net optimization can be - i have a problem where a linear regression gets 80% accuracy but it takes 400k samples and 100+ epochs of training for a 2 layer relu net to match that (my learning rate is fine thanks for asking)
5
4
65
@alex_peys
alex peysakhovich 🤖
9 months
heuristic: if you’re talking about bias-variance tradeoffs you’re doing classic machine learning, if you’re saying “crap i need more gpus”, you’re doing post-modern machine learning
@chrisalbon
Chris Albon
9 months
Recently I heard someone call it “classic machine learning” Like damn bro that hurts
36
16
350
1
4
61
@alex_peys
alex peysakhovich 🤖
10 months
explaining something i'm working on to a jr colleague, he says "oh you're one of those last generation ai people that actually knows math, that's interesting"
3
3
60
@alex_peys
alex peysakhovich 🤖
1 year
one simple statistical hack to improve your language model evaluations: don't take mean(model 1 performance) and compare it to mean(model 2 performance). instead, consider the *paired* statistic mean(model1_acc(question i) - model2_acc(question i)). in many tasks is huge latent…
Tweet media one
2
4
59
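A toy simulation of the paired trick described above (all numbers invented): per-question difficulty is a shared latent, so it cancels in the per-question difference and the standard error shrinks.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 2000
difficulty = rng.normal(0.0, 1.0, n)                  # shared latent per question
model1 = difficulty + 0.1 + rng.normal(0.0, 0.3, n)   # model 1 truly +0.1 better
model2 = difficulty + rng.normal(0.0, 0.3, n)

# unpaired: variance dominated by the spread in question difficulty
se_unpaired = np.sqrt(model1.var(ddof=1) / n + model2.var(ddof=1) / n)

# paired: difficulty cancels question-by-question
diff = model1 - model2
se_paired = diff.std(ddof=1) / np.sqrt(n)
```

With these numbers the paired standard error is several times smaller, so the same +0.1 gap is detectable with far fewer questions.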
@alex_peys
alex peysakhovich 🤖
10 months
@GrantStenger agree that in theory bfs/dfs is better but if you compute rarely (are you decomposing 10m vertex graph every second? why?), it doesn’t really matter what you use as long as it runs. we did randomized svd on huge graphs all the time at fb and it was fine.
2
0
58
@alex_peys
alex peysakhovich 🤖
4 months
"a man painting a horse" by stable diffusion. it just couldn't decide how to parse that sentence so it just did both
Tweet media one
5
7
54
@alex_peys
alex peysakhovich 🤖
1 year
one reason why “big fraud” is so prevalent in behavioral science is that there are few real world checks, other sciences have more downstream engineering built on them and so it is easier to filter fake effects (not perfect, plenty of bad examples in “harder” science too)
6
3
53
@alex_peys
alex peysakhovich 🤖
1 month
europe: we are the world champions of shutting down innovation with weird regulation california: hold my beer
@psychosort
Brian Chau, SF 7th-16th, Toronto 17th-21st
1 month
The California senate bill to crush OpenAI's competitors is fast tracked for a vote. This is the most brazen attempt to hurt startups and open source yet. 🧵
Tweet media one
31
153
481
1
16
50
@alex_peys
alex peysakhovich 🤖
9 months
@aryehazan random matrix theory and various concentration results are all extremely unintuitive (at least to me).
1
2
49
@alex_peys
alex peysakhovich 🤖
1 year
@PhDemetri just add polynomial terms til it's good
2
1
45
@alex_peys
alex peysakhovich 🤖
1 year
been playing with the @huggingface mteb leaderboard all day, super interesting dataset with a very interesting correlation pattern across tasks. if you're good at one retrieval task, you're good at all of them. other stuff? much less predictable
Tweet media one
2
9
45
@alex_peys
alex peysakhovich 🤖
4 months
gemini won’t draw me because im a stereotype apparently.
Tweet media one
Tweet media two
Tweet media three
1
5
46
@alex_peys
alex peysakhovich 🤖
1 year
this is the correct take. llm are the glue code that will allow us to put so many other technologies together. if you think of the llm as originating in machine translation this shouldn't be too surprising - they're exactly great for translating between many different modalities
@peteskomoroch
Pete Skomoroch
1 year
For people not paying close attention to AI right now, all the pieces are coming together at the same time to rapidly transform how we live and work. Here Stanford robotics researchers demonstrate GPT-4 control of a robot which can begrudgingly follow your spoken instructions:
Tweet media one
5
10
60
3
8
43
@alex_peys
alex peysakhovich 🤖
9 months
neural network architectures should just copy whatever corvid brains are doing
@TheDavidSJ
David Schneider-Joseph 🔍
9 months
@norabelrose There’s also at least some tasks on which performance scales linearly with log pallial neuron count.
Tweet media one
5
4
49
1
2
41
@alex_peys
alex peysakhovich 🤖
1 year
if i'm going to complain about companies not releasing their models/model data, i should give credit @MosaicML has a nice release of mpt with full transparency. have been playing with the instruct model and it's pretty impressive!
2
2
43
@alex_peys
alex peysakhovich 🤖
1 year
Tweet media one
1
6
42
@alex_peys
alex peysakhovich 🤖
10 months
current programming workflow:
1) ask chatgpt to write code to do X
2) remove comments from code and see if it runs
3) pass code back into chatgpt, ask to explain "to an idiot" what the code does
if result of step 3 matches X, i assume it's right and move on to the next part
3
2
41
@alex_peys
alex peysakhovich 🤖
3 months
i loved doing my phd. when i saw the academic market afterwards i noped out but the phd itself was super fun, i had great advisors (one of whom won a nobel while i was his student and still met with me that week to discuss an experiment i was running) and learned a ton
@george_berry
george berry, kate martin fan
3 months
there's a lot of shit talking grad school on here but genuinely i loved grad school, it was amazing, and i am grateful to the @CornellSoc program for the opportunity
0
0
8
2
0
42
@alex_peys
alex peysakhovich 🤖
1 year
let's play the "when did the vision pro presentation start?" game
Tweet media one
1
1
40
@alex_peys
alex peysakhovich 🤖
8 months
awesome paper. contains great bangers like: "When we wonder whether the machine is sentient, the machine’s answers draw on the abundant science fiction material found in its training set."
0
13
37
@alex_peys
alex peysakhovich 🤖
2 months
this is how i feel about state space models vs transformers
@paulgp
Paul Goldsmith-Pinkham
2 months
Tweet media one
9
15
143
0
3
38
@alex_peys
alex peysakhovich 🤖
3 months
man it's crazy how there are no more photographers since digital cameras and photoshop came along
6
3
36
@alex_peys
alex peysakhovich 🤖
1 year
the last 3-6 months in ai have really changed my baseline on a lot of things from "i kind of understand some parts of the world" to "i don't fucking know what's going to happen, predicting the future is impossible"
2
0
35
@alex_peys
alex peysakhovich 🤖
1 year
all problems in life can be solved if you realize that within every really big matrix hides a much smaller matrix that preserves most of the information
@rasbt
Sebastian Raschka
1 year
Thanks to parameter-efficient finetuning techniques, you can finetune a 7B LLM on a single GPU in 1-2 h using techniques like low-rank adaptation (LoRA). Just wrote a new article explaining how LoRA works & how to finetune a pretrained LLM like LLaMA:
29
297
2K
0
2
36
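The "smaller matrix hiding inside" can be made literal with a truncated SVD; a toy check (illustrative only, not the LoRA code from the linked article):

```python
import numpy as np

rng = np.random.default_rng(0)
# a "big" matrix that is secretly rank 5, plus a little noise
big = rng.normal(size=(200, 5)) @ rng.normal(size=(5, 300))
big += 0.01 * rng.normal(size=big.shape)

U, s, Vt = np.linalg.svd(big, full_matrices=False)
k = 5
small = U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]  # best rank-k approximation

# relative Frobenius reconstruction error: tiny, despite keeping only 5 of 200 directions
rel_err = np.linalg.norm(big - small) / np.linalg.norm(big)
```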
@alex_peys
alex peysakhovich 🤖
1 year
the fact that openai didn't release an april fools statement admitting that gpt3 was just 10,000 people in a call center is a bit disappointing
1
1
36
@alex_peys
alex peysakhovich 🤖
1 year
also if you do this with social science analyses files you can unpublish like 50% of all papers
1
2
36
@alex_peys
alex peysakhovich 🤖
1 year
given the way some of these detectors tend to work (look at whether something is highly likely under the model) it seems like any document that the model has memorized will trigger the detector as "AI generated"
@0xgaut
gaut
1 year
someone used an AI detector on the US Constitution and the results are concerning. Explain this, OpenAI!
Tweet media one
460
3K
36K
2
2
34
@alex_peys
alex peysakhovich 🤖
1 year
another for the “you should be suspicious if your machine learning is too good” pile
@SteveStuWill
Steve Stewart-Williams
1 year
Machine learning predicts hit songs from brain responses with 97% accuracy. Self-reported liking isn’t predictive. 😮
Tweet media one
40
163
1K
2
1
34
@alex_peys
alex peysakhovich 🤖
8 months
everyone who did substantial work on a paper should be an author, if you're not sure whether someone's contribution should count as substantial, you should err on the side of including them. doing otherwise is wrong. don't @ me you won't convince me otherwise
2
0
28
@alex_peys
alex peysakhovich 🤖
1 year
llm development went from "release paper with full details" to "release model evaluations but not training details" to "here are some videos" real quick
@AnthropicAI
Anthropic
1 year
Introducing 100K Context Windows! We’ve expanded Claude’s context window to 100,000 tokens of text, corresponding to around 75K words. Submit hundreds of pages of materials for Claude to digest and analyze. Conversations with Claude can go on for hours or days.
218
1K
5K
2
2
30
@alex_peys
alex peysakhovich 🤖
11 months
"computers can't do exploratory data analysis like people" is the new "computers can't play chess like people"
3
1
29
@alex_peys
alex peysakhovich 🤖
10 months
@cauchyfriend i don’t know what a class is in python and i worked on some of the most used internal and user facing things at fb for 9 years
2
0
27
@alex_peys
alex peysakhovich 🤖
6 months
playing in the text embedding space of sd turbo is pretty fun. this is just taking convex combination of 2 prompts
Tweet media one
1
1
25
@alex_peys
alex peysakhovich 🤖
4 months
this is not a randomized experiment. the much more likely story here is that twitter is better at *figuring out* which of two papers with similar abstracts/conference accepts will be important later, not that twitter *causes* it
@deliprao
Delip Rao e/σ
4 months
Crazy AF. Paper studies @_akhaliq and @arankomatsuzaki paper tweets and finds those papers get 2-3x higher citation counts than control. They are now influencers 😄 Whether you like it or not, the TikTokification of academia is here!
Tweet media one
64
285
2K
2
0
27
@alex_peys
alex peysakhovich 🤖
1 month
where are all these ai people getting time to go to meetups, make flashy videos, etc…? between cleaning data, writing code, and watching the training runs i don’t even have time to cherry pick outputs to post “we’re so back” threads on twitter
2
1
26
@alex_peys
alex peysakhovich 🤖
8 months
excited to finally drop a paper about an idea @adamlerer and i have been messing around with for a while
tldr: in a simple qa task, re-sorting documents in llm context by the attention it pays to them, and *then* generating improves accuracy a bunch 1/n
Tweet media one
1
6
27
@alex_peys
alex peysakhovich 🤖
1 year
oh academia
reviewer: you studied X but really you should have studied Y, reject
reply: yes Y is important, but X is also a thing with a huge literature (theory) + many companies trying to solve it (practice), is there something wrong with how we studied X?
reviewer: no, reject
3
1
26
@alex_peys
alex peysakhovich 🤖
4 months
science naming conventions have changed a lot…
physics in 50s: “we use a feynman parametrization to solve a model of heisenberg’s uncertainty principle and evaluate it on data from the oil drop experiment”
modern ml: “we attach a yolo model to a llama backbone and then look at…
0
2
26
@alex_peys
alex peysakhovich 🤖
1 year
this is your daily reminder that pandas indexing was created in the lower levels of hell to torture people trying to do basic things like concatenate
3
0
26
@alex_peys
alex peysakhovich 🤖
10 months
@johnpdickerson weaknesses: math is hard
questions: why math so hard?
ethics review flags: why make me read math? just waterboard me already
1
0
25
@alex_peys
alex peysakhovich 🤖
1 year
a real swe looking at what i just pushed
Tweet media one
2
3
26
@alex_peys
alex peysakhovich 🤖
1 year
fell into classic trap of spending 1 hour+ to automate something i could have done in 20 boring minutes
2
0
26
@alex_peys
alex peysakhovich 🤖
1 month
this board is a scam, why are there random ceos of oil and airplane companies, only a few legit scientists, and nobody from meta?
@AndrewCurran_
Andrew Curran
1 month
This morning the Department of Homeland Security announced the establishment of the Artificial Intelligence Safety and Security Board. The 22 inaugural members include Sam Altman, Dario Amodei, Jensen Huang, Satya Nadella, Sundar Pichai and many others.
Tweet media one
310
245
1K
1
0
25
@alex_peys
alex peysakhovich 🤖
11 months
tired: all embeddings are data compression
wired: all data compression is an embedding
@goodside
Riley Goodside
11 months
this is wild — kNN using a gzip-based distance metric outperforms BERT and other neural methods for OOD sentence classification intuition: 2 texts similar if cat-ing one to the other barely increases gzip size no training, no tuning, no params — this is the entire algorithm:
Tweet media one
152
1K
7K
1
2
25
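The quoted algorithm in miniature, via a normalized compression distance with gzip (a sketch; the paper's exact formula and kNN setup may differ):

```python
import gzip

def clen(s):
    """Length of the gzip-compressed utf-8 bytes of s."""
    return len(gzip.compress(s.encode("utf-8")))

def ncd(a, b):
    """Normalized compression distance: concatenating similar texts
    barely grows the compressed size, so the distance is small."""
    ca, cb, cab = clen(a), clen(b), clen(a + " " + b)
    return (cab - min(ca, cb)) / max(ca, cb)

x = "the model was trained on a large corpus of text scraped from the web"
y = "the model was fine-tuned on a large corpus of text from the web"
z = "quarterly earnings at the airline fell sharply after fuel costs rose"
```

For classification you then run plain kNN over these pairwise distances; no parameters, no training, exactly as the quoted tweet says.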
@alex_peys
alex peysakhovich 🤖
1 year
neat guide. hits one of my favorite metaphors: "statistics is like baking, you need to follow the recipe exactly, ML is like cooking, you need to constantly taste and adjust spices"
@AIatMeta
AI at Meta
1 year
Self-supervised learning underpins today’s cutting-edge work across natural language, computer vision & more — but it’s an intricate art with high barriers to entry. Today we're releasing the SSL Cookbook, a practical guide for navigating SSL + contributing to this space ⬇️
29
168
711
2
7
24
@alex_peys
alex peysakhovich 🤖
1 year
many people in ai want machines that have “general” intelligence - i don’t care (or believe that will happen) - i want dumb machines that free people from doing the boring, repetitive, and/or dangerous tasks that take away time from actual interesting pursuits
1
3
24
@alex_peys
alex peysakhovich 🤖
1 year
so i pasted the first half of an analysis script for one of my papers into gpt and asked "what is the person writing this trying to do?" 🧵 1/n
Tweet media one
1
2
23
@alex_peys
alex peysakhovich 🤖
1 year
@kchonyc i too like to indulge in some matrix multiplication
0
0
21
@alex_peys
alex peysakhovich 🤖
1 year
ugh these econometricians, everyone knows that the more layers your neural network has the more causal it is.
@instrumenthull
Peter Hull (Parody)
1 year
This is a common misconception I see a lot in my intro econometrics class. To detect causality in regressions you actually need to look at the *adjusted* R-squared, since the regular R-squared always increases with more controls. Hope this helps!
48
23
450
2
0
23
@alex_peys
alex peysakhovich 🤖
2 months
if you want to understand ml/ai, you can't do it by reading papers. being one with the matrix requires the terrible grind of building models to do stuff, getting frustrated when they don't work, working in the data mines, etc...
3
2
23
@alex_peys
alex peysakhovich 🤖
1 year
random projections for dimensionality reduction are bullshit and shouldn't work but they do
6
1
22
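They do work, and the Johnson–Lindenstrauss lemma says why; a quick numpy check of the claim (dimensions chosen arbitrarily):

```python
import numpy as np

rng = np.random.default_rng(0)
n_points, d, k = 50, 2_000, 500
X = rng.normal(size=(n_points, d))

# random Gaussian projection, scaled so squared norms are preserved in expectation
P = rng.normal(size=(d, k)) / np.sqrt(k)
Y = X @ P

def pairwise_dists(M):
    """Euclidean distances between all distinct row pairs."""
    sq = np.sum(M ** 2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2 * M @ M.T
    iu = np.triu_indices(M.shape[0], k=1)
    return np.sqrt(np.maximum(d2[iu], 0.0))

# every pairwise distance survives the 2000 -> 500 projection almost unchanged
ratios = pairwise_dists(Y) / pairwise_dists(X)
```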
@alex_peys
alex peysakhovich 🤖
11 months
now i do ai and my day is mostly pasting pytorch errors into chatgpt
0
0
22
@alex_peys
alex peysakhovich 🤖
10 months
@Apoorva__Lal im actually curious now: is there a fundamentally easy way to estimate size of null space from appropriately chosen observations (x, Ax)? @ben_golub nerd snipe here
0
0
21
@alex_peys
alex peysakhovich 🤖
11 months
cs researchers: why are people sending so many papers to conferences these days? we really should take actions to increase the signal / noise ratio
also cs researchers: this grad student went to the bathroom and didn’t come out with a neurips paper, i dunno…
@linylinx
Tianlin
11 months
Is this the *minimum* requirement for a new grad in machine learning now? #NVIDIA
Tweet media one
136
354
3K
0
1
22
@alex_peys
alex peysakhovich 🤖
1 year
i have a theory that zuck's metaverse obsession was always just a long, petty, con to get apple to do something dumb
0
1
21
@alex_peys
alex peysakhovich 🤖
10 months
@giffmana @_basilM neat! that equation would be SO hard to read without that
0
0
20
@alex_peys
alex peysakhovich 🤖
9 months
i did 1 hike on the ca coastline and came up with at least 5 research things i want to try, time off is really good for the brain
Tweet media one
3
0
21
@alex_peys
alex peysakhovich 🤖
1 year
a great way to see a major weakness of current llms is to take a task it can do an easy version of (sort these 2 words into alphabetical order) and watch it fail on a complicated version that could, in principle, be done by repeated application of the easy thing.
Tweet media one
4
2
21
@alex_peys
alex peysakhovich 🤖
6 months
the impossible text to image prompt (images from DALLE, MJ v6, SDXL respectively)
Tweet media one
Tweet media two
Tweet media three
2
1
21
@alex_peys
alex peysakhovich 🤖
7 months
best documentary ive ever seen
Tweet media one
@growing_daniel
Daniel
7 months
you gotta be kidding me
Tweet media one
70
45
1K
0
1
21
@alex_peys
alex peysakhovich 🤖
11 months
come join tech where nobody cares if you just type in all lower case without punctuation or grammar
@Andrew_Akbashev
Andrew Akbashev
11 months
In your application letter for #PhD / postdoc, NEVER ever say:
"Hi prof"
"Hello"
"Dear Professor"
"Greetings of the day"
If you do, your email will be immediately deleted by 99% of professors.
▫️ Only start your applications with “Dear Prof. [second_name],”
And don’t…
289
337
3K
1
0
21
@alex_peys
alex peysakhovich 🤖
10 months
the main parallel to be drawn between oppenheimer-era physics and today's ai is that you succeed when you reduce your problem to matrix multiplication
2
2
19
@alex_peys
alex peysakhovich 🤖
2 months
ah yes stripe, the company famously founded, incubated, and grown in *checks notes* san francisco, france
@McGuinnessEU
Mairead McGuinness
2 months
This morning I met with @collision , President of @stripe – a European success story in payments. Great conversation about the international payments landscape, how we tackle cyber-risks and fraud, and the role of financial education.
Tweet media one
182
14
305
0
0
19
@alex_peys
alex peysakhovich 🤖
10 months
symbolic reasoning, 'i dunno just predict the next word of the internet', and bayesian methods
Tweet media one
2
2
20
@alex_peys
alex peysakhovich 🤖
5 months
academics: oh no you copied a short description of something without citing the author, that's plagiarism
programmers: gotta write some code, ok let's find something someone else did and copy paste as much of that as we can
1
3
20
@alex_peys
alex peysakhovich 🤖
2 months
omg claude is now doing the "i don't wanna give you all the code" thing... please @AnthropicAI don't do this
Tweet media one
1
1
20
@alex_peys
alex peysakhovich 🤖
9 months
this is just a chart of how long you wait for the server at the restaurant
@RenaudFoucart
Renaud Foucart
9 months
Which country has the best food? A revealed preference approach.
Tweet media one
112
186
765
1
2
19
@alex_peys
alex peysakhovich 🤖
10 months
this video is amazing, a VERY pretty explanation of “why the hell is there a pi in the Gaussian distribution?”
@mayara_pfs
Mayara Felix
10 months
‘Metrics friends: I had never truly appreciated why pi shows up in the normal distribution’s pdf until just now. Did you folks know this all along?! It’s so beautiful! Learned it from this 3Blue1Brown animation 🥹:
7
42
333
0
2
19
@alex_peys
alex peysakhovich 🤖
2 months
@ben_golub its just one of many ways of assigning the total value (predictive accuracy) to members of a coalition (input features) when the value function is complex. it happens to be one that's relatively easy to compute. it's not that dumb, but its also only useful in a relatively small %…
1
0
19
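For concreteness, the coalition-value idea in the reply above, computed exactly on a toy 3-feature example (brute force over join orders; the feature names and accuracy numbers are invented, and real SHAP libraries approximate this rather than enumerating):

```python
from itertools import permutations

def shapley_values(players, value):
    """Exact Shapley values: average each player's marginal
    contribution over all orders in which the coalition forms."""
    totals = {p: 0.0 for p in players}
    orders = list(permutations(players))
    for order in orders:
        coalition = set()
        for p in order:
            before = value(frozenset(coalition))
            coalition.add(p)
            totals[p] += value(frozenset(coalition)) - before
    return {p: t / len(orders) for p, t in totals.items()}

# toy value function: predictive accuracy of each feature subset
acc = {frozenset(): 0.0,
       frozenset({"a"}): 0.2, frozenset({"b"}): 0.1, frozenset({"c"}): 0.0,
       frozenset({"a", "b"}): 0.5, frozenset({"a", "c"}): 0.2,
       frozenset({"b", "c"}): 0.1, frozenset({"a", "b", "c"}): 0.5}
vals = shapley_values(["a", "b", "c"], lambda s: acc[s])
```

The attributions sum to the full-coalition accuracy, and the useless feature "c" gets exactly zero, which is the "assigning the total value to members" property the reply refers to.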
@alex_peys
alex peysakhovich 🤖
1 year
in the long run, everything becomes ggplot
@paulgp
Paul Goldsmith-Pinkham
1 year
End of an era
Tweet media one
Tweet media two
27
122
1K
1
1
19
@alex_peys
alex peysakhovich 🤖
1 year
smart is overrated (and not just in econ)
@1ArmedEconomist
James Bailey
1 year
Just ran across this hilarious essay by D. McCloskey about economist compliments. Feel like I should have heard of this before now:
Tweet media one
4
11
64
0
1
18
@alex_peys
alex peysakhovich 🤖
9 months
python is simultaneously the biggest accelerator of and the biggest hurdle to progress in machine learning
@cognitivecompai
Cognitive Computations
9 months
llama-cpp-python requires pydantic 2.0.1, explicitly won't work with <=2.0
fastapi and chromadb requires pydantic 1.9.2, explicitly won't work with >=2.0
oh, dear... @abetlen
Tweet media one
30
14
166
2
2
18