Artificial neural networks now achieve human-level performance on tasks such as image and speech recognition, raising the question of whether they should be taken seriously as models of biological sensory systems. These models exhibit human-like patterns of behavior, and their feature spaces reliably predict brain activity. On the other hand, neural network models can often be fooled by small adversarial perturbations that have no effect on human perception. In this talk, I will detail our work using model metamers to investigate the similarities between neural networks and the human visual and auditory systems. Model metamers are physically distinct stimuli that produce nearly the same response within a model, and thus the same model prediction. Our results show that despite replicating aspects of behavior and neural responses, present-day deep neural networks learn invariances that deviate markedly from those of biological sensory systems. However, model metamers may help guide future model refinements that reduce or eliminate these discrepancies.
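
To make the metamer idea concrete, here is a minimal sketch of the standard synthesis procedure: starting from noise, optimize an input so that its activations at a chosen model layer match those of a reference stimulus. The model (ResNet-50), layer choice, and hyperparameters below are illustrative assumptions for exposition, not the exact setup used in the work described in the talk.

```python
# Sketch of model-metamer synthesis: optimize a noise image so that its
# activations at one intermediate layer match those of a reference image.
# Model, layer, and hyperparameters are illustrative assumptions.
import torch
import torchvision.models as models

model = models.resnet50(weights="IMAGENET1K_V1").eval()
for p in model.parameters():
    p.requires_grad_(False)  # optimize the input, not the weights

# Capture activations from one intermediate layer via a forward hook.
acts = {}
def hook(_module, _inp, out):
    acts["feat"] = out
model.layer3.register_forward_hook(hook)

reference = torch.rand(1, 3, 224, 224)  # stand-in for a natural image
model(reference)
target = acts["feat"].detach()

# Start from noise and run gradient descent on the input itself.
metamer = torch.rand_like(reference, requires_grad=True)
opt = torch.optim.Adam([metamer], lr=0.01)

for step in range(1000):
    opt.zero_grad()
    model(metamer.clamp(0, 1))
    loss = torch.nn.functional.mse_loss(acts["feat"], target)
    loss.backward()
    opt.step()

# `metamer` now elicits nearly the same layer-3 response as `reference`,
# and hence approximately the same model prediction, while being a
# physically distinct image. Whether humans also recognize it is the test.
```

The diagnostic then follows directly: if the model's invariances matched those of human observers, such metamers would remain recognizable to people; metamers that look or sound like noise to humans reveal invariances the model has that biological systems lack.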