LLM · NLP
Text2All · All2All
Multi-modal · Multi-task
Let’s explore the latest and most notable LLM-related papers.
New Papers
- MoMa: Efficient Early-Fusion Pre-training with Mixture of Modality-Aware Experts
- MoVA: Adapting Mixture of Vision Experts to Multimodal Context
- Toward Self-Improvement of LLMs via Imagination, Searching, and Criticizing
- Chat Vector: A Simple Approach to Equip LLMs with Instruction Following and Model Alignment in New Languages
- From r to Q∗: Your Language Model is Secretly a Q-Function
- Mamba: Linear-Time Sequence Modeling with Selective State Spaces
- Phi-3 Technical Report: A Highly Capable Language Model Locally on Your Phone
- DoRA: Weight-Decomposed Low-Rank Adaptation
- Many-Shot In-Context Learning
Before 2024
- Unleashing Cognitive Synergy in Large Language Models: A Task-Solving Agent through Multi-Persona Self-Collaboration
- LLM-Blender: Ensembling Large Language Models with Pairwise Ranking and Generative Fusion
- LeanDojo: Theorem Proving with Retrieval-Augmented Language Models
- ViperGPT: Visual Inference via Python Execution for Reasoning
- LongNet: Scaling Transformers to 1,000,000,000 Tokens
- Image as a Foreign Language: BEiT Pretraining for All Vision and Vision-Language Tasks
- Gorilla: Large Language Model Connected with Massive APIs
- Chameleon: Plug-and-Play Compositional Reasoning with GPT-4
- LLaMA-Adapter V2: Parameter-Efficient Visual Instruction Model
- Generative Agents: Interactive Simulacra of Human Behavior
- Reflexion: an autonomous agent with dynamic memory and self-reflection
- Self-Refine: Iterative Refinement with Self-Feedback
- HuggingGPT: Solving AI Tasks with ChatGPT and its Friends in Hugging Face
- Auto-GPT: An Autonomous GPT-4 Experiment
Troubleshooting
If you encounter issues while navigating the papers or accessing resources, consider these troubleshooting steps:
- Ensure your internet connection is stable.
- Try refreshing the page if the links aren’t responding.
- Clear your browser’s cache and cookies if links are not opening correctly.
- Check if there are any restrictions set by your network that may block external links or resources.
- For additional help, further insights, or to collaborate on AI development projects, reach out to peers, check community forums, or stay connected with fxis.ai.
At fxis.ai, we believe that such advancements are crucial for the future of AI, as they enable more comprehensive and effective solutions. Our team is continually exploring new methodologies to push the envelope in artificial intelligence, ensuring that our clients benefit from the latest technological innovations.
Understanding the Code
Imagine you are the conductor of an orchestra, and these LLM papers are the instruments. Each paper contributes a unique sound, just as each instrument contributes to the symphony. As you study the structure of and interrelations between these papers, you begin to see how different modeling approaches, such as Mamba’s selective state spaces or MoMa’s modality-aware experts, and parameter-efficient fine-tuning methods like LoRA and its weight-decomposed variant DoRA, harmonize into a cohesive melody that forms the foundation of advanced AI research. Your role is to tune these sounds so they work together effectively, producing a piece that captivates your audience, or in this case, advances the field of AI.
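Since the analogy above touches on parameter-efficient fine-tuning methods like LoRA and DoRA, here is a minimal, hypothetical sketch of the low-rank adaptation idea in PyTorch. The class name `LoRALinear`, the rank `r`, and the scaling factor `alpha` are illustrative assumptions, not the reference implementation from any paper listed above.

```python
# Minimal LoRA-style sketch (illustrative only, not an official implementation).
# DoRA extends this idea by further decomposing the weight into magnitude and direction.
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Wraps a frozen linear layer with a trainable low-rank update:
    y = W x + (alpha / r) * B A x, where A and B are small matrices."""

    def __init__(self, base: nn.Linear, r: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():  # freeze the pretrained weights
            p.requires_grad = False
        self.lora_A = nn.Parameter(torch.randn(r, base.in_features) * 0.01)
        self.lora_B = nn.Parameter(torch.zeros(base.out_features, r))
        self.scale = alpha / r

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Frozen base projection plus the scaled low-rank correction.
        return self.base(x) + self.scale * (x @ self.lora_A.T @ self.lora_B.T)

# Usage: only the small A and B factors are trained, keeping adaptation cheap.
layer = LoRALinear(nn.Linear(768, 768), r=8)
out = layer(torch.randn(2, 768))
print(out.shape)  # torch.Size([2, 768])
```

Because `lora_B` is initialized to zero, the adapted layer starts out identical to the frozen base layer, and fine-tuning only updates the low-rank factors.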