We present a novel technique to unwrap the black box of deep ReLU networks and repack it into a white box of local linear models (LLMs). A convenient LLM-based toolkit, called Aletheia, is developed for deep ReLU network interpretation, diagnostics, and simplification. It includes scikit-learn, TensorFlow/Keras, and PyTorch implementations, and provides fast, scalable computation on GPUs. We present several examples demonstrating that deep ReLU networks are indeed interpretable and self-explanatory.
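The unwrapping idea rests on a simple fact: within each linear region induced by the ReLU activation pattern, the network reduces exactly to an affine function of the input. The following minimal sketch (a one-hidden-layer network with random placeholder weights, written with NumPy rather than the Aletheia API) illustrates how the local linear model at a given point can be recovered from the activation pattern:

```python
import numpy as np

rng = np.random.default_rng(0)

# A tiny ReLU network with hypothetical random weights:
# f(x) = W2 @ relu(W1 @ x + b1) + b2
W1, b1 = rng.normal(size=(4, 3)), rng.normal(size=4)
W2, b2 = rng.normal(size=(1, 4)), rng.normal(size=1)

def forward(x):
    return W2 @ np.maximum(W1 @ x + b1, 0.0) + b2

def local_linear_model(x):
    """Unwrap the network at x: the ReLU activation pattern fixes a
    linear region, inside which the network is exactly w @ x + b."""
    pattern = (W1 @ x + b1 > 0).astype(float)  # which hidden units are active
    D = np.diag(pattern)                       # zero out inactive units
    w = W2 @ D @ W1                            # effective linear coefficients
    b = W2 @ D @ b1 + b2                       # effective intercept
    return w, b

x = rng.normal(size=3)
w, b = local_linear_model(x)
# The unwrapped LLM reproduces the network output exactly at x
assert np.allclose(forward(x), w @ x + b)
```

Deeper networks unwrap the same way, by composing one masked linear map per layer; the set of distinct activation patterns enumerates the local linear models that together form the white-box representation.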