This was sort of hinted at earlier, but let me be more explicit:
Here's one thing you can do immediately (if you're not already
doing it) that will give you a better feel for what's going on:
1) Collect the observed data.
2) Fit to the observed data. Write down the fitted parameters.
Call this set O (for observed).
3) Generate some Monte Carlo data, as a function of time, using
the O parameters.
4) Fit to the Monte Carlo data. This will give a new set of
parameters. Call this set MC(i).
5) Repeat steps 3 and 4 for all i from 1 to 200 or so.
6) Make a scatter plot of each pair of parameters. That's a
2D plot with one parameter on each axis. For N parameters,
there will be N(N-1)/2 such plots, each with 200 scattered points.
The halfwidth of the distribution will tell you the uncertainty.
If there are significant correlations, they will leap out at you.
If there are correlations, we can have a discussion about what
to do next.
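The steps above can be sketched in a few lines of numpy. This is a minimal
toy example, not a real analysis: it assumes a straight-line model
y = a + b*t with Gaussian noise of known sigma, so "fit" is just a
degree-1 polyfit. Everything here (the model, the noise level, the
parameter values) is made up for illustration; substitute your own model
and fitter.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical model: a straight line y = a + b*t with Gaussian noise.
def fit(t, y):
    # np.polyfit returns coefficients highest power first: [b, a].
    b, a = np.polyfit(t, y, 1)
    return np.array([a, b])

# Steps 1-2: collect the "observed" data and fit it; call the result O.
t = np.linspace(0.0, 10.0, 50)
sigma = 0.5  # assumed noise level
y_obs = 2.0 + 0.3 * t + rng.normal(0.0, sigma, t.size)
O = fit(t, y_obs)

# Steps 3-5: generate Monte Carlo data from the O parameters and refit,
# 200 times; row i of MC is the parameter set MC(i).
MC = np.array([fit(t, O[0] + O[1] * t + rng.normal(0.0, sigma, t.size))
               for _ in range(200)])

# Step 6: plt.scatter(MC[:, 0], MC[:, 1]) gives the scatter plot itself.
# Numerically, the spread of each column estimates the uncertainty, and
# the off-diagonal correlation coefficient flags correlated parameters.
uncert = MC.std(axis=0)
corr = np.corrcoef(MC, rowvar=False)
print(uncert, corr[0, 1])
```

For a straight-line fit with t running from 0 to 10, the intercept and
slope come out strongly anti-correlated, which is exactly the kind of
thing that leaps out of the scatter plot.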
Quantitative numbers are nice, but there's no substitute for just
looking at the data. Any serious data analysis project instantly
becomes a data visualization project.
Data miner's motto:
When all else fails,
look at the data.