A daily update on the latest AI research papers. We provide a high-level overview of a handful of papers each day and link all papers in the description for further reading. This podcast is created entirely with AI by PocketPod. Head over to https://pocketpod.app to learn more.
 
Happy October, friends! This week, we're excited to talk about the third round of new #AnimalCrossing #Lego sets. Guess who's delighted to see a certain starry-eyed owl? We also talk about a new cozy game full of animals, #Floatopia. And JB gives us her Paris Paralympics highlights. --- Read JB's Paralympics Blog: https://www.greatpopculturedebate.com…
 
This week, we've come to eulogize #AnimalCrossing #PocketCamp, and to praise it. We dive into memories of Leaf Tickets, Brake Tapper, and Campsites, and discuss the archive app. We also talk about real-world gardening surprises and fishing adventures. Shoutout to Boopy, Giovanni, and Ol' What's His Name. --- Patreon Members Only: Don't miss out on …
 
xGen-MM (BLIP-3): A Family of Open Large Multimodal Models
JPEG-LM: LLMs as Image Generators with Canonical Codec Representations
Automated Design of Agentic Systems
TurboEdit: Instant text-based image editing
Surgical SAM 2: Real-time Segment Anything in Surgical Video by Efficient Frame Pruning
Fine-tuning Large Language Models with Human-inspired Lea…
 
The AI Scientist: Towards Fully Automated Open-Ended Scientific Discovery
Med42-v2: A Suite of Clinical LLMs
Mutual Reasoning Makes Smaller LLMs Stronger Problem-Solvers
ControlNeXt: Powerful and Efficient Control for Image and Video Generation
CogVideoX: Text-to-Video Diffusion Models with An Expert Transformer
FruitNeRF: A Unified Neural Radiance Fiel…
 
It's JB Season, which means the Olympics are here. We have lots to discuss about mascots, Team USA, and all our favorite moments from the first week of the Paris 2024 Olympics. We also catch up on lesser-known annual events that teach us about science. --- Patreon Members Only: View this episode as a Vodcast! --- JB's Olympic Mascot Article: https:…
 
MMIU: Multimodal Multi-image Understanding for Evaluating Large Vision-Language Models
LLaVA-OneVision: Easy Visual Task Transfer
An Object is Worth 64x64 Pixels: Generating 3D Object via Image Diffusion
MedTrinity-25M: A Large-scale Multimodal Dataset with Multigranular Annotations for Medicine
IPAdapter-Instruct: Resolving Ambiguity in Image-based Co…
 
SAM 2: Segment Anything in Images and Videos
Gemma 2: Improving Open Language Models at a Practical Size
Coarse Correspondence Elicit 3D Spacetime Understanding in Multimodal Language Model
Improving Text Embeddings for Smaller Language Models Using Contrastive Fine-tuning
OmniParser for Pure Vision Based GUI Agent
SF3D: Stable Fast 3D Mesh Reconstructi…
 
Diffree: Text-Guided Shape Free Object Inpainting with Diffusion Model
LAMBDA: A Large Model Based Data Agent
AMEX: Android Multi-annotation Expo Dataset for Mobile GUI Agents
BetterDepth: Plug-and-Play Diffusion Refiner for Zero-Shot Monocular Depth Estimation
Very Large-Scale Multi-Agent Simulation in AgentScope
Data Mixture Inference: What do BPE Tok…
 
OpenDevin: An Open Platform for AI Software Developers as Generalist Agents
VILA^2: VILA Augmented VILA
HumanVid: Demystifying Training Data for Camera-controllable Human Image Animation
PERSONA: A Reproducible Testbed for Pluralistic Alignment
SV4D: Dynamic 3D Content Generation with Multi-Frame and Multi-View Consistency
Scalify: scale propagation for…
 
Scaling Laws with Vocabulary: Larger Models Deserve Larger Vocabularies
Scaling Retrieval-Based Language Models with a Trillion-Token Datastore
Shape of Motion: 4D Reconstruction from a Single Video
Streetscapes: Large-scale Consistent Street View Generation Using Autoregressive Video Diffusion
Understanding Reference Policies in Direct Preference Opti…
 
Qwen2 Technical Report
Learning to Refuse: Towards Mitigating Privacy Risks in LLMs
The Good, The Bad, and The Greedy: Evaluation of LLMs Should Not Ignore Non-Determinism
Q-Sparse: All Large Language Models can be Fully Sparsely-Activated
GRUtopia: Dream General Robots in a City at Scale
 
Skywork-Math: Data Scaling Laws for Mathematical Reasoning in Large Language Models -- The Story Goes On
Video Diffusion Alignment via Reward Gradients
Multimodal Self-Instruct: Synthetic Abstract Image and Visual Reasoning Instruction Using Language Model
Q-GaLore: Quantized GaLore with INT4 Projection and Layer-Adaptive Low-Rank Gradients
MAVIS: Math…
 
Welcome to Sprocket Pob colon something #PocketPod fans! This week, we consider going deep into the #HannaBarbera universe, rejoice in the short-lived #ACNH island tours, and get into the Rar Experience. And please pay no attention to careless whispers. --- Patreon Members Only: View this episode as a Vodcast! --- Support us for $1/$2 on Patreon! ht…
 
Unveiling Encoder-Free Vision-Language Models
FunAudioLLM: Voice Understanding and Generation Foundation Models for Natural Interaction Between Humans and LLMs
AriGraph: Learning Knowledge Graph World Models with Episodic Memory for LLM Agents
RULE: Reliable Multimodal RAG for Factuality in Medical Vision Language Models
ChartGemma: Visual Instruction-…
 
Diffusion Forcing: Next-token Prediction Meets Full-Sequence Diffusion
Let the Expert Stick to His Last: Expert-Specialized Fine-Tuning for Sparse Architectural Large Language Models
Planetarium: A Rigorous Benchmark for Translating Text to Structured Planning Languages
InternLM-XComposer-2.5: A Versatile Large Vision Language Model Supporting Long-Co…
 
We-Math: Does Your Large Multimodal Model Achieve Human-like Mathematical Reasoning?
ROS-LLM: A ROS framework for embodied AI with task feedback and structured reasoning
MMEvalPro: Calibrating Multimodal Benchmarks Towards Trustworthy and Efficient Evaluation
LiteSearch: Efficacious Tree Search for LLM
Wavelets Are All You Need for Autoregressive Image…
 
Scaling Synthetic Data Creation with 1,000,000,000 Personas
HuatuoGPT-Vision, Towards Injecting Medical Visual Knowledge into Multimodal LLMs at Scale
LLaRA: Supercharging Robot Learning Data for Vision-Language Policy
Direct Preference Knowledge Distillation for Large Language Models
GaussianDreamerPro: Text to Manipulable 3D Gaussians with Highly Enh…
 
OMG-LLaVA: Bridging Image-level, Object-level, Pixel-level Reasoning and Understanding
Step-DPO: Step-wise Preference Optimization for Long-chain Reasoning of LLMs
MUMU: Bootstrapping Multimodal Image Generation from Text-to-Image Data
Simulating Classroom Education with LLM-Empowered Agents
SeaKR: Self-aware Knowledge Retrieval for Adaptive Retrieval …
 
The FineWeb Datasets: Decanting the Web for the Finest Text Data at Scale
YouDream: Generating Anatomically Controllable Consistent Text-to-3D Animals
DiffusionPDE: Generative PDE-Solving Under Partial Observation
Aligning Diffusion Models with Noise-Conditioned Perception
Unlocking Continual Learning Abilities in Language Models…
 
DreamBench++: A Human-Aligned Benchmark for Personalized Image Generation
BigCodeBench: Benchmarking Code Generation with Diverse Function Calls and Complex Instructions
Cambrian-1: A Fully Open, Vision-Centric Exploration of Multimodal LLMs
Evaluating D-MERIT of Partial-annotation on Information Retrieval
Long Context Transfer from Language to Vision…
 
LongRAG: Enhancing Retrieval-Augmented Generation with Long-context LLMs
Judging the Judges: Evaluating Alignment and Vulnerabilities in LLMs-as-Judges
Complexity of Symbolic Representation in Working Memory of Transformer Correlates with the Complexity of a Task
Towards Retrieval Augmented Generation over Large Video Libraries
Stylebreeder: Exploring …
 
This week we hope (not) to see natural disasters from a safe, social distance while wearing bike helmets. We dig deep into the next round of #AnimalCrossing #Lego sets and the new Super Mario Land in Orlando. We also talk about #birds, like a lot. --- Patreon Members Only: View this episode as a Vodcast! --- Join our Patreon! https://patreon.com/the…
 
XLand-100B: A Large-Scale Multi-Task Dataset for In-Context Reinforcement Learning
Make It Count: Text-to-Image Generation with an Accurate Number of Objects
ChartMimic: Evaluating LMM's Cross-Modal Reasoning Capability via Chart-to-Code Generation
Needle In A Multimodal Haystack
BABILong: Testing the Limits of LLMs with Long Context Reasoning-in-a-Hay…
 
Depth Anything V2
An Image is Worth More Than 16x16 Patches: Exploring Transformers on Individual Pixels
Transformers meet Neural Algorithmic Reasoners
Samba: Simple Hybrid State Space Models for Efficient Unlimited Context Language Modeling
OpenVLA: An Open-Source Vision-Language-Action Model
Alleviating Distortion in Image Generation via Multi-Resolut…
 
NaRCan: Natural Refined Canonical Image with Integration of Diffusion Prior for Video Editing
MotionClone: Training-Free Motion Cloning for Controllable Video Generation
What If We Recaption Billions of Web Images with LLaMA-3?
Magpie: Alignment Data Synthesis from Scratch by Prompting Aligned LLMs with Nothing
PowerInfer-2: Fast Large Language Model I…
 
An Image is Worth 32 Tokens for Reconstruction and Generation
McEval: Massively Multilingual Code Evaluation
Zero-shot Image Editing with Reference Imitation
The Prompt Report: A Systematic Survey of Prompting Techniques
TextGrad: Automatic "Differentiation" via Text
 
Autoregressive Model Beats Diffusion: Llama for Scalable Image Generation
Husky: A Unified, Open-Source Language Agent for Multi-Step Reasoning
Vript: A Video Is Worth Thousands of Words
Lighting Every Darkness with 3DGS: Fast Training and Real-Time Rendering for HDR View Synthesis
VALL-E 2: Neural Codec Language Models are Human Parity Zero-Shot Text …
 
Mixture-of-Agents Enhances Large Language Model Capabilities
WildBench: Benchmarking LLMs with Challenging Tasks from Real Users in the Wild
CRAG -- Comprehensive RAG Benchmark
GenAI Arena: An Open Evaluation Platform for Generative Models
Large Language Model Confidence Estimation via Black-Box Access
 
ShareGPT4Video: Improving Video Understanding and Generation with Better Captions
BitsFusion: 1.99 bits Weight Quantization of Diffusion Model
Step-aware Preference Optimization: Aligning Preference with Denoising Performance at Each Step
Buffer of Thoughts: Thought-Augmented Reasoning with Large Language Models
SF-V: Single Forward Video Generation Mo…
 
Apple announced new Siri features and Apple Intelligence today. Interestingly, Apple has already released a paper, titled "Ferret-UI," on how it all works: a multimodal vision-language model capable of understanding widgets, icons, and text on an iOS mobile screen, and reasoning about their spatial relationships and functional meanings. https://arxiv.…
 
Block Transformer: Global-to-Local Language Modeling for Fast Inference
Parrot: Multilingual Visual Instruction Tuning
Mobile-Agent-v2: Mobile Device Operation Assistant with Effective Navigation via Multi-Agent Collaboration
Ouroboros3D: Image-to-3D Generation via 3D-aware Recursive Diffusion
LiveSpeech: Low-Latency Zero-shot Text-to-Speech via Autore…
 
Seed-TTS: A Family of High-Quality Versatile Speech Generation Models
To Believe or Not to Believe Your LLM
I4VGen: Image as Stepping Stone for Text-to-Video Generation
Self-Improving Robust Preference Optimization
Guiding a Diffusion Model with a Bad Version of Itself
 
MMLU-Pro: A More Robust and Challenging Multi-Task Language Understanding Benchmark
Learning Temporally Consistent Video Depth from Video Diffusion Priors
Show, Don't Tell: Aligning Language Models with Demonstrated Feedback
Artificial Generational Intelligence: Cultural Accumulation in Reinforcement Learning
ZeroSmooth: Training-free Diffuser Adaptati…
 
Transformers are SSMs: Generalized Models and Efficient Algorithms Through Structured State Space Duality
Video-MME: The First-Ever Comprehensive Evaluation Benchmark of Multi-modal LLMs in Video Analysis
Perplexed by Perplexity: Perplexity-Based Data Pruning With Small Reference Models
Kaleido Diffusion: Improving Conditional Diffusion Models with Au…
 
AI Papers Podcast for 06/04/2024
DITTO-2: Distilled Diffusion Inference-Time T-Optimization for Music Generation
GECO: Generative Image-to-3D within a SECOnd
PLA4D: Pixel-Level Alignments for Text-to-4D Gaussian Splatting
DevEval: A Manually-Annotated Code Generation Benchmark Aligned with Real-World Code Repositories
Parrot: Efficient Serving of LLM-b…
 
AI Papers Podcast for 06/03/2024
Jina CLIP: Your CLIP Model Is Also Your Text Retriever
Similarity is Not All You Need: Endowing Retrieval Augmented Generation with Multi Layered Thoughts
MotionLLM: Understanding Human Behaviors from Human Motions and Videos
Xwin-LM: Strong and Scalable Alignment Practice for LLMs
MOFA-Video: Controllable Image Animati…
 
MAP-Neo: Highly Capable and Transparent Bilingual Large Language Model Series
T2V-Turbo: Breaking the Quality Bottleneck of Video Consistency Model with Mixed Reward Feedback
LLMs achieve adult human performance on higher-order theory of mind tasks
Nearest Neighbor Speculative Decoding for LLM Generation and Attribution
Zipper: A Multi-Tower Decoder Ar…
 
Phased Consistency Model
2BP: 2-Stage Backpropagation
GFlow: Recovering 4D World from Monocular Video
Instruct-MusicGen: Unlocking Text-to-Music Editing for Music Language Models via Instruction Tuning
LLaMA-NAS: Efficient Neural Architecture Search for Large Language Models
 
An Introduction to Vision-Language Modeling
Transformers Can Do Arithmetic with the Right Embeddings
Matryoshka Multimodal Models
I2VEdit: First-Frame-Guided Video Editing via Image-to-Video Diffusion Models
Zamba: A Compact 7B SSM Hybrid Model
Looking Backward: Streaming Video-to-Video Translation with Feature Banks…
 
ConvLLaVA: Hierarchical Backbones as Visual Encoder for Large Multimodal Models
Meteor: Mamba-based Traversal of Rationale for Large Language and Vision Models
Grokked Transformers are Implicit Reasoners: A Mechanistic Journey to the Edge of Generalization
Aya 23: Open Weight Releases to Further Multilingual Progress
Stacking Your Transformers: A Close…
 
ReVideo: Remake a Video with Motion and Content Control
Not All Language Model Features Are Linear
RectifID: Personalizing Rectified Flow with Anchored Classifier Guidance
Visual Echoes: A Simple Unified Transformer for Audio-Visual Generation
DeepSeek-Prover: Advancing Theorem Proving in LLMs through Large-Scale Synthetic Data
Dense Connector for MLLMs…
 
Your Transformer is Secretly Linear
Diffusion for World Modeling: Visual Details Matter in Atari
Face Adapter for Pre-Trained Diffusion Models with Fine-Grained ID and Attribute Control
Reducing Transformer Key-Value Cache Size with Cross-Layer Attention
OmniGlue: Generalizable Feature Matching with Foundation Model Guidance
Personalized Residuals for C…
 
FIFO-Diffusion: Generating Infinite Videos from Text without Training
MoRA: High-Rank Updating for Parameter-Efficient Fine-Tuning
OpenRLHF: An Easy-to-use, Scalable and High-performance RLHF Framework
Imp: Highly Capable Large Multimodal Models for Mobile Devices
Octo: An Open-Source Generalist Robot Policy
Towards Modular LLMs by Building and Reusing …
 
INDUS: Effective and Efficient Language Models for Scientific Applications
Observational Scaling Laws and the Predictability of Language Model Performance
Grounded 3D-LLM with Referent Tokens
Layer-Condensed KV Cache for Efficient Inference of Large Language Models
Dynamic data sampler for cross-language transfer learning in large language models…
 
Chameleon: Mixed-Modal Early-Fusion Foundation Models
LoRA Learns Less and Forgets Less
Many-Shot In-Context Learning in Multimodal Foundation Models
CAT3D: Create Anything in 3D with Multi-View Diffusion Models
Grounding DINO 1.5: Advance the "Edge" of Open-Set Object Detection
Dual3D: Efficient and Consistent Text-to-3D Generation with Dual-mode…
 
ALPINE: Unveiling the Planning Capability of Autoregressive Learning in Language Models
Xmodel-VLM: A Simple Baseline for Multimodal Vision Language Model
BEHAVIOR Vision Suite: Customizable Dataset Generation via Simulation
Naturalistic Music Decoding from EEG Data via Latent Diffusion Models
No Time to Waste: Squeeze Time into Channel for Mobile Vide…
 
VidProM: A Million-scale Real Prompt-Gallery Dataset for Text-to-Video Diffusion Models
Beyond Scaling Laws: Understanding Transformer Performance with Associative Memory
Coin3D: Controllable and Interactive 3D Assets Generation with Proxy-Guided Conditioning
Hunyuan-DiT: A Powerful Multi-Resolution Diffusion Transformer with Fine-Grained Chinese Unde…
 
What matters when building vision-language models?
RLHF Workflow: From Reward Modeling to Online RLHF
SUTRA: Scalable Multilingual Language Model Architecture
SambaNova SN40L: Scaling the AI Memory Wall with Dataflow and Composition of Experts
Plot2Code: A Comprehensive Benchmark for Evaluating Multi-modal Large Language Models in Code Generation from …
 
BlenderAlchemy: Editing 3D Graphics with Vision-Language Models
Stylus: Automatic Adapter Selection for Diffusion Models
Ag2Manip: Learning Novel Manipulation Skills with Agent-Agnostic Visual and Action Representations
DressCode: Autoregressively Sewing and Generating Garments from Text Guidance
PLLaVA : Parameter-free LLaVA Extension from Images to V…
 