In today's data-driven era, time series prediction has become an indispensable component of many fields. However, building a large-scale time series prediction model that combines strong performance with efficient computation remains a major challenge, and the scarcity of high-quality, large-scale public time series datasets further exacerbates the problem.
Recently, an international team of Chinese researchers from Princeton University, Squirrel AI, Griffith University, and other institutions jointly proposed Time-MoE, a time series foundation model built on the Mixture-of-Experts (MoE) architecture. Time-MoE is the first to push the parameter scale of a pre-trained time series model to the billion level, a milestone breakthrough in the field of time series prediction. Leveraging the advantages of the MoE architecture, Time-MoE scales its parameters to 2.4 billion, improving prediction accuracy while reducing computational cost, and surpasses existing models to reach the state-of-the-art (SOTA) level.
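The reason a sparse MoE architecture can grow total parameters without a matching growth in compute is that each token is routed to only a few experts, so the cost per token depends on the number of active experts rather than the full parameter count. The sketch below is a minimal, generic sparse MoE feed-forward layer in PyTorch that illustrates this idea; it is not the Time-MoE implementation, and the layer sizes, expert count, top-k value, and class name are illustrative assumptions.

```python
# Minimal sketch of a sparse Mixture-of-Experts (MoE) feed-forward layer.
# Not the Time-MoE implementation; sizes, k, and names are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F


class SparseMoELayer(nn.Module):
    def __init__(self, d_model=512, d_ff=2048, num_experts=8, top_k=2):
        super().__init__()
        self.top_k = top_k
        # Router scores each token against every expert.
        self.router = nn.Linear(d_model, num_experts)
        # Each expert is an independent feed-forward block: total parameters
        # grow with num_experts, but per-token compute does not.
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(d_model, d_ff), nn.GELU(), nn.Linear(d_ff, d_model))
            for _ in range(num_experts)
        )

    def forward(self, x):                      # x: (batch, seq_len, d_model)
        gate_logits = self.router(x)           # (batch, seq_len, num_experts)
        weights, indices = gate_logits.topk(self.top_k, dim=-1)
        weights = F.softmax(weights, dim=-1)   # renormalize over the chosen experts
        out = torch.zeros_like(x)
        # Dispatch each token only to its selected top-k experts.
        for slot in range(self.top_k):
            for e, expert in enumerate(self.experts):
                mask = indices[..., slot] == e         # tokens routed to expert e
                if mask.any():
                    out[mask] += weights[..., slot][mask].unsqueeze(-1) * expert(x[mask])
        return out


if __name__ == "__main__":
    layer = SparseMoELayer()
    # e.g. a batch of embedded 96-step time series windows (hypothetical shapes)
    y = layer(torch.randn(4, 96, 512))
    print(y.shape)  # torch.Size([4, 96, 512])
```

With 8 experts but only 2 active per token, roughly a quarter of the expert parameters participate in any given forward pass, which is the general mechanism that lets MoE models keep inference cost well below what a dense model of the same total size would require.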