In recent years, rapid advances in Artificial Intelligence (AI) have revolutionised numerous industries, from healthcare to finance and beyond. However, as AI systems become increasingly integrated into daily life, a significant concern has emerged: the potential erosion of privacy caused by the vast amounts of personal data collected to train these systems. Federated Learning (FL) has emerged as a plausible solution to these privacy concerns. It allows machine learning models to be trained collaboratively across devices while sensitive data remains on the user’s device, making it an increasingly popular paradigm. Several organisations deploy FL-based pipelines to train such models on their users’ data, motivating a scenario in which these companies would benefit from utilising each other’s data to improve their models. Multilevel Federated Learning, a proposed extension of traditional FL, applies federated learning across different organisations (silos), each of which is itself an FL cluster, offering improved model performance. However, Multilevel FL introduces trust concerns between organisations by requiring the participating members to agree on a trusted central aggregator. This work introduces a framework for enabling trust among different FL clusters through a decentralised orchestrator and storage. In particular, our implementation leverages an Ethereum-based blockchain and the InterPlanetary File System (IPFS) as the orchestrator and storage, respectively, and achieves accuracy similar to Multilevel FL on various workloads while alleviating trust concerns.