Ranjay Krishna

@RanjayKrishna

Followers: 4,947 · Following: 420 · Media: 190 · Statuses: 1,769

I teach machines to see and interact with people. + Assistant Professor @UWcse - Prev. Research scientist @MetaAI - PhD @StanfordAILab - Instructor @Stanford

California, USA
Joined August 2011
@RanjayKrishna
Ranjay Krishna
3 years
I successfully defended my PhD a few days ago. Huge thanks to my amazing advisors @drfeifei and @msbernst for supporting me throughout my journey.
@drfeifei
Fei-Fei Li
3 years
A hearty congratulations to my student @RanjayKrishna (co-advised by @msbernst ) for a successful PhD thesis defense! Great pioneering work in combining human cognition, human-computer interaction and #AI ! Thank you PhD committee members @chrmanning @syeung10 @magrawala 🌹
@RanjayKrishna
Ranjay Krishna
3 years
🎓 I'm on the faculty job market this year! Please send me a message if your department (or one you know) is interested in a Computer Vision / HCI researcher who designs models inspired by human perception and social interaction! My application materials:
@RanjayKrishna
Ranjay Krishna
8 months
Our submission received my first ever 10/10 review from NeurIPS. Check out our #NeurIPS2023 Oral. We release the largest vision-language dataset for histopathology and train a SOTA model for classifying histopathology images on 13 benchmarks spanning 8 sub-pathologies.
@mehsaygin
Mehmet Saygın Seyfioğlu
8 months
Quilt-1M has been accepted for an oral presentation at @NeurIPSConf . As promised, we have also released our data and our model: See you all in New Orleans!
@RanjayKrishna
Ranjay Krishna
6 years
My latest #CVPR2018 paper with Ines, @msbernst and @drfeifei is now live with fully documented open source training/testing code. We treat visual relationships as shifts in attention and perform attention saccades around scene graphs.
@RanjayKrishna
Ranjay Krishna
4 years
We are happy to introduce Action Genome: a new representation, new dataset, and new model for decomposing actions into spatio-temporal scene graphs. Action Genome has 1.7M relationships between 0.4M object instances and enables few-shot action prediction.
@RanjayKrishna
Ranjay Krishna
5 years
I expect a future where ML agents will dynamically learn through real-world interactions with people. My #CVPR2019 paper with @msbernst and @drfeifei pushes us towards that goal by learning to pose directed questions to learn about the visual world.
@RanjayKrishna
Ranjay Krishna
5 years
Announcing the first 𝗜𝗖𝗖𝗩 𝘄𝗼𝗿𝗸𝘀𝗵𝗼𝗽 𝗼𝗻 𝗦𝗰𝗲𝗻𝗲 𝗚𝗿𝗮𝗽𝗵 𝗥𝗲𝗽𝗿𝗲𝘀𝗲𝗻𝘁𝗮𝘁𝗶𝗼𝗻 𝗮𝗻𝗱 𝗟𝗲𝗮𝗿𝗻𝗶𝗻𝗴. If your research involves structured data or graph-based learning, consider submitting to us by August 15, 2019:
@RanjayKrishna
Ranjay Krishna
10 months
Our new paper finds something quite neat: We easily scale up how many tools LLMs can use to over 200 tools (APIs, models, Python functions, etc.) ...without any training, without a single tool-use demonstration!!
@_akhaliq
AK
10 months
Tool Documentation Enables Zero-Shot Tool-Usage with Large Language Models paper page: Today, large language models (LLMs) are taught to use new tools by providing a few demonstrations of the tool's usage. Unfortunately, demonstrations are hard to
@RanjayKrishna
Ranjay Krishna
5 years
On my way to Seoul for #ICCV2019 . If you’re at the conference on October 28th, come check out a full day workshop I am organizing on Scene Graph Representation and Learning (). We have a great lineup of speakers and posters.
@RanjayKrishna
Ranjay Krishna
3 years
Academic quarter recap: here's a staff photo after the last lecture of @cs231n . It's crazy that we were the largest course at Stanford this quarter. This year, we added new lectures and assignments (open sourced) on attention, transformers, and self-supervised learning.
@RanjayKrishna
Ranjay Krishna
6 years
Someone made an in-depth video of our recent work at #CVPR2018 on Referring Relationships. If you are interested in how we train models to disambiguate between different people or objects in images, go check it out. #ComputerVision #MachineLearning
@RanjayKrishna
Ranjay Krishna
1 year
Deploying LLMs continues to be a challenge as they grow in model size and consume more data. We introduce a simple distillation mechanism that makes even 770M T5 models outperform 540B PaLM. Led by my PhD student @cydhsieh with collaborators @chunliang_tw @ajratner
@_akhaliq
AK
1 year
Distilling Step-by-Step! Outperforming Larger Language Models with Less Training Data and Smaller Model Sizes reduce both the model size and the amount of data required to outperform LLMs; our 770M T5 model outperforms the 540B PaLM model using only 80% of available data on a
@RanjayKrishna
Ranjay Krishna
7 years
NEW dataset, NEW task, NEW model for dense video captioning. Work done with @kenjihata , @drfeifei and @jcniebles .
@RanjayKrishna
Ranjay Krishna
4 years
If you're releasing a new user-facing AI project/product, you might want to read our new #CSCW2020 paper. We find that words or metaphors used to describe AI agents have a causal effect on users' intention to adopt your agent. Thread👇
@RanjayKrishna
Ranjay Krishna
10 months
Human-centered AI is no longer just a buzzword. It's a thriving, growing area of research. Come to our workshop tomorrow at #ICML2023 to learn about it. AI models have finally matured for mass market use. HCI+AI interactions will only become more vital.
@RanjayKrishna
Ranjay Krishna
7 years
Award for the most creative and least informative poster at #cvpr2017 YOLO9000: better, faster, stronger!!
@RanjayKrishna
Ranjay Krishna
6 years
Congrats to Amir Zamir @zamir_ar , Silvio Savarese @silviocinguetta and co-authors for their Best paper award at #CVPR2018 “Taskonomy: Disentangling Task Transfer Learning”
@RanjayKrishna
Ranjay Krishna
5 years
We updated our generative human evaluation benchmark with 6 GANs, 4 image datasets (generating faces and objects), 2 sampling methods. We show statistically insignificant correlation with FID and other automatic metrics. Use HYPE ()!
@RanjayKrishna
Ranjay Krishna
5 years
Measuring progress in generative models is akin to hill climbing on noise. Automatic metrics are heuristics and human evaluations unreliable. Our latest paper presents a new human evaluation grounded in psychophysics, consistent, turnkey and cost-effective
@RanjayKrishna
Ranjay Krishna
6 months
📈New paper! 1 training method, no new architecture, no additional data, SOTA results on 8 vision-language benchmarks. Our 5B model variants even outperform 13B+ models!
@huyushi98
Yushi Hu
6 months
Multimodal reasoning is hard. Even the best LMMs struggle with counting😥 Any fix for it? Introduce VPD from @GoogleAI : we teach LMMs multimodal CoT reasoning with data synthesized from LLM + vision tools, and achieve new SOTAs on many multimodal tasks!🥳
@RanjayKrishna
Ranjay Krishna
9 months
With the ICCV ban finally lifted, here is our new #ICCV2023 paper, which already has a few follow-up papers. Our method faithfully evaluates text-to-image generation models. It provides more than just a score; it identifies missed objects, incorrect attributes, relationships, etc.
@huyushi98
Yushi Hu
9 months
It is notoriously hard to evaluate images created by text-to-image models. Why not use the powerful LLMs and VLMs to analyze them? We introduce TIFA🦸🏻‍♀️ #ICCV2023 , which uses GPT + BLIP to quantitatively measure what Stable Diffusion struggles on! Proj:
@RanjayKrishna
Ranjay Krishna
10 months
New paper: Real-world image editing that is consistent with lighting, occlusion, and 3D shapes💀🧊🎧⚽️🍎! We introduce a new 3D image editing benchmark called OBJect. Using OBJect, we train 3DIT, a diffusion model that can rotate, translate, insert, and delete objects in images.
@tanmay2099
Tanmay Gupta
10 months
Imagine a 2D image serving as a window to a 3D world that you could reach into, manipulate objects, and see changes reflected in the image. In our new OBJect 3DIT work, we edit images in this 3D-aware fashion while only operating in the pixel space! 🧵
@RanjayKrishna
Ranjay Krishna
5 years
Structured prediction requires substantial training data. Our new paper introduces the first few-shot scene graph model with predicates as functions within a graph convolution framework, resulting in the first semantically & spatially interpretable model.
@RanjayKrishna
Ranjay Krishna
4 years
And they say virtual conferences can’t be fun! Check out our moves! #vHLF20
@RanjayKrishna
Ranjay Krishna
11 months
New paper! Now that self-supervision for high-level vision tasks has matured, we ask: what is needed for pixel-level tasks? Given cog. sci. evidence, we show that scaling up learning multi-view correspondences improves SOTA on depth, segmentation, normals, and pose estimation.
@_akhaliq
AK
11 months
MIMIC: Masked Image Modeling with Image Correspondences paper page: Many pixelwise dense prediction tasks-depth estimation and semantic segmentation in computer vision today rely on pretrained image representations. Therefore, curating effective
@RanjayKrishna
Ranjay Krishna
5 years
For researchers working on scene graphs or visual relationships, I just open sourced a simple library to easily visualize #SceneGraphs . Now you can directly use this to generate your qualitative results in your publications.
@RanjayKrishna
Ranjay Krishna
7 months
In 1938, Don Budge won all the Tennis Grand Slams in one calendar year. Now, in 2023, my advisor Michael Bernstein has done the same in HCI research. Congrats @msbernst !!
@StanfordHCI
Stanford Human-Computer Interaction Group
7 months
Our very own @msbernst won a best paper award in all CHI, CSCW, and UIST this year! Congratulations!!!
@RanjayKrishna
Ranjay Krishna
7 years
@soumithchintala why does the pytorch logo look so much like the tinder logo?
@RanjayKrishna
Ranjay Krishna
4 years
Hey everyone, we have a great lineup of speakers at our upcoming workshop on the importance of Compositionality in Computer Vision (), at #CVPR2020 (with @eadeli , @jcniebles , @drfeifei , @orussakovsky ). Consider submitting a paper. Also, stay safe.
@RanjayKrishna
Ranjay Krishna
6 years
@jebbery And then when you start your job: First task: What is 26 + 99? Me: It's 19. #Overfitting
@RanjayKrishna
Ranjay Krishna
5 years
Really proud of my summer intern, Pranav Khadpe, from IIT Kharagpur. He spent the summer with us at Stanford working on how we can improve the usability of AI systems even before people interface with them.
@RanjayKrishna
Ranjay Krishna
4 years
If you are attending #CVPR2020 , we have some exciting things for you to attend and check out: 1) Come to our (w @drfeifei @jcniebles @eadeli Jingwei) workshop on Sunday on Compositionality in Computer Vision. We have an amazing line up of speakers.
@RanjayKrishna
Ranjay Krishna
7 years
My talk at #ECCV2016 in Amsterdam is now available online. You can check out the paper here:
@RanjayKrishna
Ranjay Krishna
7 years
Our work on dense #video #captioning was featured in #techcrunch . Collaborators - @drfeifei , @kenjihata @jcniebles
@RanjayKrishna
Ranjay Krishna
9 months
Training robots requires data, which today is hard to collect. You need to (1) buy expensive robots, (2) teach people to operate them, and (3) purchase objects for the robots to manipulate. Our #CoRL2023 paper shows you don't need any of the three, not even a robot! All you need is an iPhone.
@DJiafei
Jiafei Duan
9 months
🚨Is it possible to devise an intuitive approach for crowdsourcing trainable data for robots without requiring a physical robot🤖? Can we democratize robot learning for all?🧑‍🤝‍🧑 Check out our latest #CoRL2023 paper-> AR2-D2: Training a Robot Without a Robot
@RanjayKrishna
Ranjay Krishna
8 years
Visual Genome paper has now been released. Project advised by @drfeifei @msbernst @ayman @lijiali_vision
@RanjayKrishna
Ranjay Krishna
11 months
At #CVPR2023 this year, I had a number of conversations about how we need a faithful benchmark for measuring vision-language compositionality. SugarCrepe is our response. Our best models are still not compositional. It's time to make some progress!
@cydhsieh
Cheng-Yu Hsieh
11 months
Introducing SugarCrepe: A benchmark for faithful vision-language compositionality evaluation! ‼️ Current compositional image2text benchmarks are HACKABLE: Blind models without image access outperform SOTA CLIP models due to severe dataset artifacts 📜:
@RanjayKrishna
Ranjay Krishna
7 years
Just redesigned and will be teaching a fun new course on #computervision with @jcniebles @Stanford . Go check it out:
@RanjayKrishna
Ranjay Krishna
5 years
Funniest quote from #iccv2019 : “strongly supervised learning is the opium of machine learning and now we are all hooked on it”
@RanjayKrishna
Ranjay Krishna
7 years
@CloudinAround @joshmeyerphd @AndrewYNg Maybe you should read about what the problem really is before commenting. The waived tuition will be taxable under the new bill, making our PhDs unaffordable.
@RanjayKrishna
Ranjay Krishna
7 years
Today we open sourced all 9 assignments for the #ComputerVision class I teach @Stanford with @jcniebles - allowing everyone to learn various concepts like lane detection, deformable parts, segmentation, dimensionality reduction, optical flow, etc.
@RanjayKrishna
Ranjay Krishna
11 months
There are so many vision-language models: OpenAI’s CLIP, Meta’s FLAVA, Salesforce’s ALBEF, etc. Our #CVPR2023 ⭐️ highlight ⭐️ paper finds that none of them show sufficient compositional reasoning capacity. Since perception and language are both compositional, we have work to do
@zixianma02
Zixian Ma
11 months
Have vision-language models achieved human-level compositional reasoning? Our research suggests: not quite yet. We’re excited to present CREPE – a large-scale Compositional REPresentation Evaluation benchmark for vision-language models – as a 🌟highlight🌟at #CVPR2023 . 🧵1/7
@RanjayKrishna
Ranjay Krishna
5 years
#ICCV2019 is huge!!!
@RanjayKrishna
Ranjay Krishna
3 years
Soon we will be releasing over 200 computer vision student group projects, on topics including autonomous driving, denoising chest x-rays, understanding satellite images, colorizing old movies, estimating real-estate prices, and transfer learning on edge devices.
@RanjayKrishna
Ranjay Krishna
4 years
New paper out! Typical active learning algorithms assume there is only one correct answer, which is not true for many tasks, like question answering. Our new uncertainty measurement is 5x more data-efficient even when there are multiple correct answers.
@RanjayKrishna
Ranjay Krishna
5 years
Check out this new workshop and benchmark for studying vision systems that can navigate as social agents amongst people -- by my colleagues ( @SHamidRezatofig ) at @StanfordSVL .
@RanjayKrishna
Ranjay Krishna
3 years
@fchollet It very much depends on the act function but for most cases, you want to use conv-bn-act. Without bn before act, saturated neurons will kill gradients. We do case studies of this across multiple activation functions in these slides:
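The saturation claim above is easy to demonstrate without any framework. Below is a minimal, stdlib-only sketch (illustrative; not taken from the linked slides) that measures the average tanh gradient, 1 - tanh(x)^2, on a batch of offset pre-activations with and without batch normalization applied before the activation:

```python
import math
import random

def batch_norm(xs, eps=1e-5):
    """Normalize a batch of pre-activations to zero mean, unit variance."""
    mean = sum(xs) / len(xs)
    var = sum((x - mean) ** 2 for x in xs) / len(xs)
    return [(x - mean) / math.sqrt(var + eps) for x in xs]

random.seed(0)
# Pre-activations with a large positive offset: fed straight into tanh,
# nearly every unit saturates and the local gradient 1 - tanh(x)^2 vanishes.
pre = [5.0 + random.gauss(0, 1) for _ in range(256)]

grad_without_bn = sum(1 - math.tanh(x) ** 2 for x in pre) / len(pre)
grad_with_bn = sum(1 - math.tanh(x) ** 2 for x in batch_norm(pre)) / len(pre)

print(f"mean tanh gradient without bn: {grad_without_bn:.4f}")  # near zero
print(f"mean tanh gradient with bn:    {grad_with_bn:.4f}")     # healthy
```

With normalization the pre-activations are re-centered into tanh's responsive region, which is the conv-bn-act intuition in one number.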
@RanjayKrishna
Ranjay Krishna
5 years
Extremely proud of my lab mates for winning best paper award at #ICRA2019 on their work on self-supervised learning that combines vision and touch.
@animesh_garg
Animesh Garg
5 years
@RanjayKrishna
Ranjay Krishna
5 years
Congratulations to my advisor Fei-Fei Li ( @drfeifei ) for being elected as an ACM Fellow.
@RanjayKrishna
Ranjay Krishna
6 months
New paper: DreamSync improves any text-to-image generation model by aligning it better with text inputs. We use DreamSync to improve Stable Diffusion XL.
@sunjiao123sun_
Jiao Sun
6 months
Generated images not following your prompt? Introducing 𝔻𝕣𝕖𝕒𝕞𝕊𝕪𝕟𝕔 from @GoogleAI : improving alignment + aesthetics of image generation models with feedback from VLMs! ✅ Model Agnostic ✅ Plug and Play ❌ RL ❌ Human Annotation ❌ Real Image
@RanjayKrishna
Ranjay Krishna
5 years
Our paper has been accepted at #ICCV2019 . Come check us out in Seoul later this year. In the meantime, we are planning on releasing code soon.
@vincentsunnchen
vincent sunn chen
5 years
Structured prediction requires large training sets, but crowdsourcing is ineffective— so, existing models ignore visual relationships without sufficient labels. Our method uses 10 relationship labels to generate training data for any scene graph model!
@RanjayKrishna
Ranjay Krishna
6 years
I will be speaking at an event tomorrow at @Stanford on the importance of #Trust and #Transparency in Human-AI collaboration. Come stop by to hear about how we can build dynamic learning systems that can continuously learn from interactions with humans.
@RanjayKrishna
Ranjay Krishna
8 years
Upcoming at CSCW2017: Analyzing long-term crowdworkers using 8.89M tasks from 6K workers with @drfeifei & @msbernst .
@RanjayKrishna
Ranjay Krishna
8 years
Upcoming at #ECCV2016 as an oral: Detecting relationships between objects in images (with @msbernst & @drfeifei ).
@RanjayKrishna
Ranjay Krishna
2 years
Speaking of collective behavior, check out our new paper at NeurIPS 2022. Inspired by how animals coordinate to accomplish tasks, we design a simple multi-agent intrinsic reward that enables decentralized multi-agent training and lets AI agents adapt even to new partners.
@elonmusk
Elon Musk
2 years
Because it consists of billions of bidirectional interactions per day, Twitter can be thought of as a collective, cybernetic super-intelligence
@RanjayKrishna
Ranjay Krishna
6 months
This new project is a huge team effort from the PRIOR team at AI2 with striking conclusions: Real-world navigation, exploration, manipulation emerges: (1) without any RL, (2) without any human demonstrations, (3) with only automatically generated simulation data.
@ehsanik
Kiana Ehsani
6 months
🚀 Imitating shortest paths in simulation enables effective navigation and manipulation in the real world. Our findings fly in the face of conventional wisdom! This is a big joint effort from PRIOR @allen_ai (6 first authors!).
@RanjayKrishna
Ranjay Krishna
5 years
Lots of first-time #cvpr2019 attendees from Asia who code in Python and studied computer science
@RanjayKrishna
Ranjay Krishna
6 years
Blog post explaining the math behind all the generative adversarial network papers #GAN . It's a fun short read.
@RanjayKrishna
Ranjay Krishna
7 years
Google translates the Turkish gender-neutral "O bir doktor. O bir hemşire." to "He is a doctor. She is a nurse." #bias
@RanjayKrishna
Ranjay Krishna
7 years
Joined an #AI and #deeplearning group on Facebook. Starting to realize that the public has no idea what #ArtificialIntelligence actually is.
@RanjayKrishna
Ranjay Krishna
3 years
Check out our research work on how to design chatbots that people want to adopt!!
@Stanford
Stanford University
3 years
Consumers have consistent personality preferences for their online friends, new @StanfordHAI research shows.
@RanjayKrishna
Ranjay Krishna
1 year
Old-school research presentations can be boring. Check out this fun, creative skit Helena put together to explain our new CSCW paper. TL;DR: Recent works keep finding that AI explanations don't help people make better decisions. We propose a theory for when they do help!
@HelenasResearch
Helena Vasconcelos' Research
1 year
Do you want to learn about how explanations can help reduce overreliance on AIs? Watch this fantastic, out-of-this-world, one-of-a-kind, spectacular, etc. short video explaining our work! We put a lot of ❤️ into it and would appreciate the views.
@RanjayKrishna
Ranjay Krishna
8 months
If you’re at #ICCV2023 , reach out and come say hi. I will be giving two talks: - one at the closing the loop in vision and language workshop: - one at the scene graph workshop:
@RanjayKrishna
Ranjay Krishna
7 years
Spotted on a whiteboard at #stanford 's #computerscience department
@RanjayKrishna
Ranjay Krishna
9 years
One of the best #data webpages I have ever seen http://t.co/ovgUGRKF2G
@RanjayKrishna
Ranjay Krishna
3 years
On a personal note, I am going to miss co-instructing with @danfei_xu and @drfeifei . This is my 5th and last time instructing at Stanford. I have learned so much from working with so many amazing teaching assistants and students. Thank you, everyone.
@RanjayKrishna
Ranjay Krishna
6 years
"We’re going to have robots with free will, absolutely. We have to understand how to program them and what we gain out of it. For some reason, evolution has found this sensation of free will to be computationally desirable." - Judea Pearl
@RanjayKrishna
Ranjay Krishna
7 years
Training LSTMs? Moral of the story: DROPOUT EVERYTHING! embeddings->dropout, activations->dropout, inputs->dropout.
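A minimal sketch of that "dropout everything" recipe using inverted dropout, stdlib only (the variable names and values are illustrative, not from any particular paper):

```python
import random

def dropout(xs, p, train=True, rng=random):
    """Inverted dropout: zero each unit with probability p and scale
    survivors by 1/(1-p), so the expected activation is unchanged."""
    if not train or p == 0.0:
        return list(xs)
    return [0.0 if rng.random() < p else x / (1.0 - p) for x in xs]

random.seed(7)
embeddings = [0.5, -1.2, 0.3, 0.9]   # embeddings -> dropout
h = dropout(embeddings, p=0.5)
# ...run h through the recurrent cell, then drop the activations too:
activations = [0.1, 0.7, -0.4, 0.2]  # activations -> dropout
out = dropout(activations, p=0.5)

# At evaluation time, dropout is a no-op:
assert dropout(activations, p=0.5, train=False) == activations
```

The same `dropout` call would be applied to the raw inputs as well; because survivors are rescaled by 1/(1-p), no extra scaling is needed at test time.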
@RanjayKrishna
Ranjay Krishna
4 years
Contrary to how today's AI products are advertised, people are more likely to adopt an agent that they originally expected to have low competence but outperforms that expectation. They are less forgiving of mistakes made by agents they expect to have high competence.
@RanjayKrishna
Ranjay Krishna
3 years
@chipro It's subjective. I would personally say scaling up models *IS* academic research. It's easy to dismiss it as not innovative. But research is also about studying the outcomes of design decisions/interventions. In this case, the intervention is increasing model size.
@RanjayKrishna
Ranjay Krishna
4 years
My superstar labmate is going on the job market!!
@danfei_xu
Danfei Xu
4 years
Greetings Twitterverse! Excited to share that I'm going on the academic job market this year! Check out my research at
@RanjayKrishna
Ranjay Krishna
3 years
I have been extremely lucky to have @timnitGebru as a labmate and as a friend. Thank you for sharing your brilliant work and always being generous with your precious time. I am appalled you are dealing with this. I am here to support and help in any way I can.
I was fired by @JeffDean for my email to Brain women and Allies. My corp account has been cutoff. So I've been immediately fired :-)
@RanjayKrishna
Ranjay Krishna
6 years
We can localize the "jacket worn by the person next to the person on the phone" or the "table below the person to the left of the person wearing the hat".
@RanjayKrishna
Ranjay Krishna
6 years
#CVPR2018 results just came out!! There doesn't appear to be any correlation between paper ID and whether your paper will get accepted, unlike past vision conferences.
@RanjayKrishna
Ranjay Krishna
7 years
Check out our latest work on Generating Descriptive Image Paragraphs with @jkrause314 and @drfeifei
@RanjayKrishna
Ranjay Krishna
3 years
Our students were more than just computer science majors. We had students from immunology, anthropology, MBA programs, biology, geology, aeronautics, music, neuroscience, philosophy, and many more. We also had over 50 industry professionals remotely enroll.
@RanjayKrishna
Ranjay Krishna
6 years
Here is a neat visualization exploring motifs in the visual world using relationships from @VisualGenome . Adding structure allows us to further vision research and ask questions like: "what kinds of objects usually contain food?→bowls, plates, table"
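The "what kinds of objects usually contain food?" query above is just a filtered count over (subject, predicate, object) relationship triples. A toy sketch with made-up triples (hypothetical stand-ins for Visual Genome annotations):

```python
from collections import Counter

# Hypothetical (subject, predicate, object) relationship triples.
triples = [
    ("bowl", "contains", "food"), ("plate", "contains", "food"),
    ("bowl", "contains", "food"), ("table", "contains", "food"),
    ("dog", "on", "sofa"), ("cup", "on", "table"),
]

def objects_that(predicate, obj):
    """Count the subjects of every (subject, predicate, obj) triple."""
    return Counter(s for s, p, o in triples if p == predicate and o == obj)

print(objects_that("contains", "food").most_common())
# [('bowl', 2), ('plate', 1), ('table', 1)]
```

Swapping in a real relationship dump for `triples` gives exactly the "bowls, plates, table" style of answer the tweet describes.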
@RanjayKrishna
Ranjay Krishna
7 years
Kaiming He just won his 4th best paper award!!! Go check out Mask R-CNN.
@RanjayKrishna
Ranjay Krishna
7 years
Men also like shopping! #EMNLP2017 best paper reduces gender bias in visual recognition using Lagrangian relaxation.
@RanjayKrishna
Ranjay Krishna
10 months
We have an amazing group of speakers lined up: Nikola Banovic @nikola_banovic Anca Dragan @ancadianadragan James Landay @landay Q. Vera Liao @QVeraLiao Meredith Ringel Morris @merrierm Chenhao Tan @ChenhaoTan
@RanjayKrishna
Ranjay Krishna
5 years
Check out our newest paper! We automatically assign probabilistic relationship labels to images and can use them to train any existing scene graph model with as few as 10 examples.
@vincentsunnchen
vincent sunn chen
5 years
Structured prediction requires large training sets, but crowdsourcing is ineffective— so, existing models ignore visual relationships without sufficient labels. Our method uses 10 relationship labels to generate training data for any scene graph model!
@RanjayKrishna
Ranjay Krishna
4 years
Congratulations @PranavKhadpe !!! Contrary to how today's AI products are marketed, our paper finds that people are more likely to adopt and cooperate with AI agents that project low competence but outperform expectations, and are less forgiving when agents project high competence.
@PranavKhadpe
Pranav Khadpe
4 years
Excited to share that our paper, "Conceptual Metaphors Impact Perceptions of Human-AI Collaboration", was awarded an Honorable Mention at #CSCW2020 🎇 Paper: Want to say a huge thank you to my co-authors @RanjayKrishna @drfeifei @jeffhancock and @msbernst
@RanjayKrishna
Ranjay Krishna
8 years
If you are the anonymous reviewer who wrote a 12 page review for my @VisualGenome paper, I just wanna say that you're awesome. #bestReviewer
@RanjayKrishna
Ranjay Krishna
7 years
NVIDIA just unveiled the new TESLA V100!!!
@RanjayKrishna
Ranjay Krishna
5 months
Embodied AI has been limited to simple, lifeless simulated houses for many years. Just like the Holodeck in the Star Trek episodes I grew up watching, our Holodeck system lets you create diverse, lived-in 3D simulated environments populated with thousands of objects:
@YueYangAI
Yue Yang
5 months
🛸 Announce Holodeck, a promptable system that can generate diverse, customized, and interactive 3D simulated environments ready for Embodied AI 🤖 applications. Website: Paper: Code: #GenerativeAI [1/8]
@RanjayKrishna
Ranjay Krishna
5 years
Had a great time presenting some of my latest research in NYC.
@BrownInstitute
The Brown Institute
5 years
Engagement learning to train an SI system. Generating questions and engaging users to learn models in an automated way. “We have one example of an image of a red panda and need to scale up to recognize red pandas consistently.”
@RanjayKrishna
Ranjay Krishna
6 years
It's refreshing to see corporations using their strengths and capabilities to give back to society. @DoorDash just launched Project Dash () to use their network of restaurants and their fleet of cars to deliver food to those who are hungry. #dashforgood
@RanjayKrishna
Ranjay Krishna
6 years
Apparently, all of the research Google is doing is some kind of AI now. What happened to systems? social sciences? HCI? UX research? INNOVATION != AI
@GoogleAI
Google AI
6 years
Hmm, have I made a wrong turn? I was looking for @GoogleResearch … Nope, you're in the right place! We’re unifying all of our research efforts under “Google AI”, which encompasses all the state-of-the-art innovation happening across Google. Learn more at
@RanjayKrishna
Ranjay Krishna
6 years
Fun paper on repurposing emojis for personal communication. #CHI2018
@RanjayKrishna
Ranjay Krishna
5 years
#CVPR2019 drops their reviews like @Beyonce drops her albums... without notice!
@RanjayKrishna
Ranjay Krishna
6 months
Very excited to see more of Ross!
@anikembhavi
Ani Kembhavi
6 months
Incredibly excited to announce that Ross Girshick ( @inkynumbers ) will be joining the PRIOR team @allen_ai ! Ross is one of the most influential and impactful researchers in AI. I'm so honored that he is joining us, and I'm really looking forward to working with him.
@RanjayKrishna
Ranjay Krishna
6 years
@christianrudder I hope you go back and realize how uncomfortable you made people feel at #chi2018 . Your talk was a heteronormative disaster.
@RanjayKrishna
Ranjay Krishna
4 years
Go check out our #NeurIPS2019 *Oral* talk for "HYPE: Human eYe Perceptual Evaluation of Generative Models" today at 4:50pm at West Exhibition Hall C + B3. Also, HYPE now offers *evaluation as a service* for generative models at
@RanjayKrishna
Ranjay Krishna
6 years
Reviewing an #ECCV2018 paper and apparently this paper beats its baselines by XXX%. #prematureSubmission
@RanjayKrishna
Ranjay Krishna
8 years
Version-1.2 of @VisualGenome dataset has been released: MORE data, CLEANER annotations, DENSER graphs.