I'm currently considering a project to simulate the (Earth-seeing) coverage of one or more satellites over the course of a year. The point would be to compare how different orbits result in different amounts of coverage, both of the Earth's surface and of objects placed on it.
The code would be written in Go, in an attempt to gain calculation speed over languages like Python. Since there aren't many orbital propagators available for Go, this would mean either writing my own orbit propagation algorithm (both kinetic and kinematic) or using one of the few existing ones, such as the SGP4-based go-satellite package. The simulation would also be parallelized across multiple cores.
As such, I'm trying to wrap my head around which orbit propagation methodology would be suitable for this kind of simulation. Since the simulation spans an entire year, errors will accumulate with any propagation method I choose; still, fidelity should be high for short-term analysis and reasonable for long-term. I'm also interested in the CPU resource requirements of the different propagator models/methods/algorithms; ideally I wouldn't need a supercomputing cluster.
So my question is as the title states: how important is the choice of orbit propagation model for this kind of simulation? What would the magnitude of the accumulated error really be, and which propagation method is best suited for this kind of work?