Abstract: Wearable and mobile devices are widely used for crowdsensing, as they come with many sensors and are carried everywhere. Among the sensed data, videos annotated with spatiotemporal metadata contain a huge amount of information, but consume too much precious storage space. In this paper, we solve the problem of optimizing cloud-based video crowdsensing in three steps. First, we study the optimal transcoding problem on wearable and mobile cameras. We propose an algorithm that optimally selects the coding parameters to fit more videos at higher quality on wearable and mobile cameras. Second, we empirically investigate the throughput of different file transfer protocols from wearable and mobile devices to cloud servers. We propose a real-time algorithm that selects the best protocol under diverse network conditions, so as to leverage intermittent WiFi access. Last, we examine the performance of cloud databases for sensor-annotated videos, and implement a practical algorithm to search for videos overlapping a target geographical region. Our measurement study of three popular open-source cloud databases reveals their pros and cons. The three proposed algorithms are evaluated via extensive simulations and experiments. The evaluation results show the practicality and efficiency of our algorithms and system. For example, our proposed transcoding algorithm outperforms existing approaches by 12 dB in video quality and 87% in energy saving, and reduces the delivery delay by one-quarter. As another example, by intelligently choosing a proper cloud database, our system may reduce the insertion time by up to one-third, or the lookup time by up to one-fourth.
Keywords: Crowd Sensing and Crowd Sourcing, Cloud Services, Mobile and Ubiquitous Systems, Efficient Communications and Networking.