FL@FM-NeurIPS 2023 : International Workshop on Federated Learning in the Age of Foundation Models in Conjunction with NeurIPS 2023

New Orleans, LA, USA
Event Date: December 16, 2023 - December 16, 2023
Submission Deadline: October 02, 2023
Notification of Acceptance: October 27, 2023




[Call for Papers]
Training machine learning models in a centralized fashion often faces significant regulatory and privacy challenges in real-world use cases. Training data is frequently distributed across sites, creating and maintaining a central data repository demands substantial computational resources, and regulations such as GDPR and HIPAA restrict the sharing of sensitive data. Federated learning (FL) is a machine learning paradigm that mitigates these challenges by training a global model on distributed data without requiring the data to be shared. As machine learning is increasingly applied to analyze and draw insight from real-world, distributed, and sensitive data, familiarity with and adoption of this relevant and timely topic among the scientific community becomes essential.

Recently, foundation models such as ChatGPT have revolutionized the field of machine learning by demonstrating remarkable capabilities across a wide range of tasks. These models have democratized the development of machine learning models, empowering developers to focus more on tuning a foundation model to their specific task rather than building complex models from scratch. This paradigm shift has the potential to remove the barriers to entry for machine learning development, and enables a broader community of developers to create high-quality models.

However, as the model development process itself becomes increasingly accessible, a new bottleneck emerges: computation power and data access. While foundation models have the potential to perform exceptionally well across various tasks, they pose two challenges: 1) training them requires vast amounts of training data and compute power, and 2) fine-tuning them to specific applications requires specialized and potentially sensitive data. Acquiring and centralizing datasets for both training and fine-tuning poses several challenges, including data privacy concerns, legal constraints (such as GDPR, HIPAA), and computational burdens.

FL is a promising solution to address these challenges in the era of foundation models. The fundamental goal of federated learning is to train models collaboratively across decentralized devices or data silos while keeping the data securely on those devices or within specific organizations. By adopting federated learning approaches, we can leverage the vast amounts of distributed data and compute available across different sources while respecting privacy regulations and data ownership.
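To make the collaborative training idea above concrete, the following is a minimal sketch of the aggregation step of one round of Federated Averaging (FedAvg), the canonical FL algorithm. Model "weights" are plain lists of floats and local training is stubbed out; all names here are illustrative and not part of the workshop's material.

```python
# Minimal FedAvg aggregation sketch: each client trains locally on its own
# private data and shares only model parameters, never raw data.

def fedavg_round(global_weights, client_updates):
    """Aggregate locally trained client models into a new global model.

    client_updates: list of (local_weights, num_samples) pairs.
    """
    total = sum(n for _, n in client_updates)
    # Weighted average: clients with more data contribute proportionally more.
    return [
        sum(w[i] * n for w, n in client_updates) / total
        for i in range(len(global_weights))
    ]

# Example: three clients with different local dataset sizes.
global_w = [0.0, 0.0]
updates = [([1.0, 2.0], 10), ([3.0, 4.0], 30), ([5.0, 6.0], 60)]
new_global = fedavg_round(global_w, updates)
print(new_global)  # [4.0, 5.0]
```

In a real deployment each client would run several epochs of local optimization before sending its updated parameters, and the server would repeat such rounds until convergence; privacy can be further strengthened with secure aggregation or differential privacy.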

The rise of foundation models amplifies the importance and relevance of FL as a crucial research direction. With foundation models becoming the norm in machine learning development, the focus shifts from model architecture design to tackling the issues surrounding privacy-preserving and distributed learning. Advancements in FL methods have the potential to unlock the full potential of foundation models, enabling efficient and scalable training while safeguarding sensitive data.

With this in mind, we invite original research contributions, position papers, and work-in-progress reports on various aspects of federated learning in the age of foundation models. Since the emergence of foundation models has been a relatively recent phenomenon, their full impact on federated learning has not yet been well explored or understood. We hope to provide a platform to facilitate interaction among students, scholars, and industry professionals from around the world to discuss the latest advancements, share insights, and identify future directions in this exciting field. The workshop topics include but are not limited to the following.

Theory and algorithmic foundations:
-Impact of heterogeneity in FL of large models
-Multi-stage model training (e.g., base model + fine tuning)
-Optimization advances in FL (e.g., beyond first-order and local methods)
-Prompt tuning in federated settings
-Self-supervised learning in federated settings

Leveraging foundation models to improve federated learning:
-Adaptive aggregation strategies for FL in heterogeneous environments
-Foundation model enhanced FL knowledge distillation
-Overcoming data interoperability challenges using foundation models
-Personalization of FL with foundation models

Federated learning for training and tuning foundation models:
-Fairness, bias, and interpretability challenges in FL with foundation models
-Federated transfer learning with foundation models
-FL techniques for training large-scale foundation models
-Hardware for FL with foundation models
-Optimization algorithms for federated training of foundation models
-Privacy-preserving mechanisms in FL with foundation models
-Resource-efficient FL with foundation models
-Security and robustness considerations in FL with foundation models
-Systems and infrastructure for FL with foundation models
-Vertical federated learning with foundation models
-Vulnerabilities of FL with foundation models

[Submission Instructions]
Submissions should be no more than 6 pages long, excluding references, and must follow the NeurIPS'23 template. Reviewing is double-blind (author identities will not be revealed to the reviewers), so the submitted PDF file must not include any identifying information about the authors. An optional appendix of any length is allowed and should be placed at the end of the paper, after the references.

Submissions are collected on OpenReview at the following link: TBA.
Accepted papers and their review comments will be posted publicly on OpenReview. Due to the short timeline, there will be no rebuttal period, but authors are encouraged to interact and discuss with reviewers on OpenReview after the acceptance notifications are sent out. Rejected papers and their reviews will remain private and will not be posted publicly.

For questions, please contact: [email protected]

[Proceedings and Dual Submission Policy]
Our workshop does not have formal proceedings, i.e., it is non-archival. Accepted papers will be available in public on OpenReview together with the reviewers' comments. Revisions to accepted papers will be allowed until shortly before the workshop date.

We welcome submissions of unpublished papers, including those submitted to other venues, provided that venue permits it. However, papers that have been accepted to an archival venue as of Sept. 28, 2023 should not be resubmitted to this workshop, because the goal of the workshop is to share recent results and discuss open problems. In particular, papers that have been accepted to the NeurIPS'23 main conference should not be resubmitted to this workshop.

[Presentation Format]
The workshop will primarily take place in person. Presenters who cannot attend in person are expected to be able to connect remotely over Zoom for the oral talks; the poster sessions, however, will be in-person only. Depending on the situation, we may include a lightning-talk session for accepted poster presentations whose presenters cannot attend physically, or organize a separate virtual session after the official workshop date. If a paper is accepted as an oral talk, the NeurIPS organizers require a pre-recorded presentation by early November, which will be made available for virtual participants to view. All accepted papers will be posted on OpenReview and linked on our webpage.

[Invited Speakers]
-Cho-Jui Hsieh, Associate Professor, University of California, Los Angeles, USA
-Michael I. Jordan, Distinguished Professor, University of California, Berkeley, USA
-Jayashree Kalpathy-Cramer, Professor, University of Colorado, Anschutz, USA
-Lingjuan Lyu, Senior Research Scientist, Sony AI, Japan
-Peter Richtárik, Professor, King Abdullah University of Science and Technology, Saudi Arabia
-Zheng Xu, Senior Research Scientist, Google Research, USA

[Organizing Committee]
-Jinghui Chen (Pennsylvania State University, USA)
-Lixin Fan (Webank, China)
-Gauri Joshi (Carnegie Mellon University, USA)
-Sai Praneeth Karimireddy (UC Berkeley, USA)
-Stacy Patterson (Rensselaer Polytechnic Institute, USA)
-Shiqiang Wang (IBM T. J. Watson Research Center, USA)
-Han Yu (Nanyang Technological University, Singapore)


Summary

FL@FM-NeurIPS 2023 : International Workshop on Federated Learning in the Age of Foundation Models in Conjunction with NeurIPS 2023 will take place in New Orleans, LA, USA. It is a one-day event held on Saturday, December 16, 2023.

FL@FM-NeurIPS 2023 falls under the following areas: ARTIFICIAL INTELLIGENCE, MACHINE LEARNING, FEDERATED LEARNING, and FOUNDATION MODELS. Submissions for this workshop are due by Oct 2, 2023, and authors can expect notification of the results by Oct 27, 2023.

Deadlines are generally strict; please check the official event website for possible changes before making any travel arrangements.

Other Details of the FL@FM-NeurIPS 2023

  • Short Name: FL@FM-NeurIPS 2023
  • Full Name: International Workshop on Federated Learning in the Age of Foundation Models in Conjunction with NeurIPS 2023
  • Timing: 09:00 AM-06:00 PM (expected)
  • Fees: Check the official website of FL@FM-NeurIPS 2023
  • Event Type: Workshop
  • Website Link: https://federated-learning.org/fl@fm-neurips-2023/
  • Location/Address: New Orleans, LA, USA

