Welcome to the NeLaMKRR workshop series, bringing together researchers and practitioners exploring the intersections of language models, knowledge representation, and reasoning across diverse domains including medicine, law, and science.
Workshop Editions
2024
First International Workshop
Associated with the 21st International Conference on Principles of Knowledge Representation and Reasoning (KR 2024). The inaugural workshop was held on November 4, 2024.
2025
Second International Workshop
Associated with the 22nd International Conference on Principles of Knowledge Representation and Reasoning (KR 2025). Registration is now open!
2026
Third International Workshop
Associated with the Federated Logic Conference 2026 (FLoC 2026). NeLaMKRR 2026 is now part of the SKILLED-LLMs umbrella workshop. Join us for an expanded workshop bringing together three communities!
About the Workshop
Reasoning is an essential component of human intelligence, playing a fundamental role in our ability to think critically, make responsible decisions, and solve challenging problems. Traditionally, AI has addressed reasoning in the context of logic-based representations of knowledge. However, the recent leap forward in natural language processing, with the emergence of transformer-based language models, hints that these models may exhibit reasoning abilities, particularly as they grow in size and are trained on more data.
The goal of this workshop is to create a platform for researchers from different disciplines and AI perspectives to explore approaches and techniques for reconciling the reasoning performed by transformer-based language models with reasoning over logic-based representations. Specific objectives include analyzing the reasoning abilities of language models against KR methods, injecting KR-style reasoning abilities into language models (including by neuro-symbolic means), and formalizing the kinds of reasoning that language models carry out.
Topics of Interest
- Analysis of language models' reasoning abilities and knowledge representation
- Argumentation, negotiation, and agent-based reasoning in language models
- Infusing KR-style reasoning into language models
- Knowledge injection and extraction mechanisms in language models
- Qualitative assessment of reasoning accuracy in language models
- Techniques for making language model reasoning more predictable
- Formalizing the types of reasoning language models carry out
- Applications of reasoning in medicine, law, and science
- Ethics and limitations of reasoning in language models
- Categories of reasoning in language models: deductive, inductive, and abductive
- Formal vs. informal ('common sense') reasoning
- Chain-of-thought prompting
- Prompting and in-context learning
- Problem decomposition strategies
- Rationale engineering
- Bootstrapping and self-improvement methods
- Integration of language models with knowledge graphs
- Conversion of unstructured data into knowledge graphs
- Development of domain-specific language models
- Neurosymbolic knowledge representation models