The performance of deep learning models relies heavily on high-quality datasets. However, under the prevailing paradigm of pretrained-model-based applications, once a dataset is publicly released, data owners largely lose control over how it is used for model training. Existing data protection approaches either focus on post hoc accountability or irreversibly destroy the training utility of the data through perturbations, making it difficult to satisfy the dual requirements of offline public sharing and authorized, controllable usage. In this paper, we propose DeadMap, a reversible training-usability control framework designed for model fine-tuning scenarios. DeadMap introduces a secret label permutation combined with synchronized multi-layer feature-alignment perturbations. The perturbations remain visually imperceptible and the surface labels are left unchanged, yet fine-tuning without the correct key suffers severe performance degradation. In contrast, authorized users only need a lightweight label-mapping key and a simple label remapping during training to recover performance close to that achieved with the original dataset. Experimental results show that DeadMap establishes a substantial performance gap between authorized and unauthorized settings, providing a lightweight and practical solution that balances open data sharing with the controlled use of high-value datasets.
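
The following is a minimal illustrative sketch, not DeadMap's actual implementation: it assumes the key is a secret permutation over class indices (all function names here are hypothetical). The released dataset keeps its original "surface" labels, while the secret permutation describes which class the feature-alignment perturbations steer each example toward. An authorized user remaps labels with the key at training time; an unauthorized user trains on the mismatched surface labels.

```python
# Hypothetical sketch of a label-permutation key and authorized-side remapping.
# This only illustrates the key mechanics; the feature-alignment perturbations
# applied to the images are not modeled here.
import numpy as np

def make_key(num_classes: int, seed: int) -> np.ndarray:
    """Secret class permutation held by the data owner and authorized users."""
    rng = np.random.default_rng(seed)
    return rng.permutation(num_classes)

def remap_labels(surface_labels: np.ndarray, key: np.ndarray) -> np.ndarray:
    """Authorized training: map each surface label y to its permuted target key[y]."""
    return key[surface_labels]

if __name__ == "__main__":
    key = make_key(num_classes=10, seed=42)

    surface_labels = np.array([0, 1, 2, 3, 4])              # labels as publicly released
    authorized_targets = remap_labels(surface_labels, key)  # consistent with perturbed features
    unauthorized_targets = surface_labels                    # mismatched with perturbed features

    print("key:", key)
    print("authorized targets  :", authorized_targets)
    print("unauthorized targets:", unauthorized_targets)
```

Under this assumed setup, the key is nothing more than a permutation of class indices, which is why the abstract can describe it as lightweight: authorized users change only the training targets, not the released data.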



