A Performance Prediction-based DNN Partitioner for Edge TPU Pipelining
Abstract
Intelligent IoT applications deployed in adversarial environments often operate without reliable cloud connections, requiring local execution of AI pipelines on resource-constrained edge devices. The Edge Tensor Processing Unit (TPU) is a specialized AI hardware accelerator known for its low power consumption and high computational efficiency. To optimize DNN performance across multiple Edge TPUs, DNN models are often pipelined by partitioning them into segments. However, uneven workload distribution across these segments can create latency bottlenecks, reducing overall throughput and increasing memory accesses due to the limited on-chip memory. This issue is especially concerning in mission-critical applications, where minimizing memory contention and ensuring robust performance are critical. To overcome these challenges, we develop a novel performance prediction-based partitioning tool for DNN models on Edge TPU pipelines. The tool uses a Transformer-based model to accurately predict the inference time of individual DNN segments, enabling more efficient partitioning. We introduce two methods: one relying solely on the prediction model and another combining prediction with profiling. Tested on 120 models from the NASBench-101 dataset, both methods significantly improved partitioning robustness and efficiency, reducing solving time by up to 98.86% and 97.21%, respectively, compared to traditional profiling-based approaches, while maintaining comparable bottleneck latencies.
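To make the underlying optimization concrete, the minimal sketch below (not the authors' implementation) enumerates contiguous partitions of a layer sequence into a fixed number of pipeline segments and selects the split whose slowest segment is fastest, i.e., the one minimizing the bottleneck latency. The callable `predict_latency` is a hypothetical stand-in for the paper's Transformer-based latency predictor, and the toy layer costs are invented for illustration.

```python
# Minimal sketch: exhaustive bottleneck-minimizing partitioning of a DNN
# layer sequence into contiguous pipeline segments. `predict_latency` is a
# hypothetical stand-in for a learned per-segment latency predictor.
from itertools import combinations
from typing import Callable, List, Sequence, Tuple

def best_partition(
    layers: Sequence[str],
    num_segments: int,
    predict_latency: Callable[[Sequence[str]], float],
) -> Tuple[List[Sequence[str]], float]:
    """Return the contiguous partition whose slowest segment
    (the pipeline bottleneck) has the lowest predicted latency."""
    n = len(layers)
    best_segments: List[Sequence[str]] = []
    best_bottleneck = float("inf")
    # Choose num_segments - 1 cut points between consecutive layers.
    for cuts in combinations(range(1, n), num_segments - 1):
        bounds = (0, *cuts, n)
        segments = [layers[bounds[i]:bounds[i + 1]] for i in range(num_segments)]
        bottleneck = max(predict_latency(seg) for seg in segments)
        if bottleneck < best_bottleneck:
            best_segments, best_bottleneck = segments, bottleneck
    return best_segments, best_bottleneck

# Example with invented per-layer costs and a toy additive predictor.
if __name__ == "__main__":
    costs = {"conv1": 3.0, "conv2": 5.0, "pool": 1.0, "fc": 2.0}
    segs, bottleneck = best_partition(
        list(costs), 3, lambda seg: sum(costs[l] for l in seg)
    )
    print(segs, bottleneck)
```

In this framing, a profiling-based approach measures each candidate segment on hardware, while the prediction-based methods replace those measurements with model estimates, which is where the reported reductions in solving time come from.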