I present a methodology that allows comparison between models constructed under different modeling paradigms. Consider two models built to study different aspects of the same system, namely Air Mobility Command's strategic airlift system. The first model simulates a fleet of aircraft moving a given combination of cargo and passengers from an onload point to an offload point. The second model is a linear program that, given cargo and passenger requirements, optimizes aircraft and route selection to minimize late deliveries and non-deliveries. Further, the optimization model represents the airlift system at a more aggregated level than does the simulation. Because the two models lack immediately comparable input and output structures, direct comparison between them is difficult. I develop a methodology that structures such a comparison and apply it to the two large-scale models described above. Models that compare favorably under this methodology are deemed covalid. Models that perform similarly under approximately the same input conditions are considered covalid in a narrow sense. Models that are covalid in this narrow sense may hold the potential to be used in an iterative fashion, each improving the input (and thus the output) of the other. I prove that, under certain regularity conditions, this method of output/input crossflow converges; if the convergence is to a valid representation of the real-world system, the models are considered covalid in a wide sense. Further, if one of the models has been independently validated in the traditional meaning, then a validation by association of the other model may be effected through this process.
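To fix ideas, the crossflow can be read as a fixed-point iteration. The following is a minimal sketch in my own notation; the maps $f$ and $g$, the iterate $x_k$, and the contraction condition are illustrative assumptions, not the dissertation's actual constructions. Let $f$ carry a simulation input to the optimization input induced by the simulation's output, and let $g$ carry the optimization's output back to a simulation input. One round of crossflow is then
\[
  x_{k+1} \;=\; g\bigl(f(x_k)\bigr), \qquad k = 0, 1, 2, \ldots,
\]
and one sufficient regularity condition is that the composite map $h = g \circ f$ be a contraction on a complete metric space of model inputs,
\[
  d\bigl(h(x),\, h(y)\bigr) \;\le\; L\, d(x, y), \qquad 0 \le L < 1,
\]
in which case the Banach fixed-point theorem guarantees that the crossflow converges to a unique fixed point $x^{*} = h(x^{*})$.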