Automated driving and advanced driver assistance systems benefit from a complete understanding of the traffic scene around the vehicle. Existing systems gather such data through on-board cameras and other sensors, but scene understanding can be limited by sensing range or occlusion by other objects. To gather information beyond the view of a single vehicle, we propose and explore FusionEye, a connected-vehicle system that allows multiple vehicles to share perception data over vehicle-to-vehicle communications and collaboratively merge it into a more complete traffic scene. FusionEye uses a self-adaptive topology merging algorithm based on bipartite graph matching. We explore its network bandwidth requirements and the trade-off between bandwidth and merging accuracy. Experimental results show that FusionEye creates more complete scenes and achieves a merging accuracy of 88% under a 5% packet drop rate and a transmission latency of around 200 ms. We also show that richer vehicle descriptors offer only marginal accuracy improvements over options with lower communication overhead.
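
To make the bipartite-graph merging idea concrete, the sketch below illustrates one plausible instantiation, not the paper's exact algorithm: locally detected vehicles are matched against detections shared by a peer using a minimum-cost assignment over pairwise position distances, and remote detections that find no local match are treated as vehicles outside the local sensing range. The function name, the distance-based cost, and the `gate` threshold are assumptions for illustration.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def merge_scenes(local_tracks, remote_tracks, gate=3.0):
    """Associate local and remote vehicle detections via bipartite matching.

    local_tracks, remote_tracks: (N, 2) and (M, 2) arrays of vehicle
    positions in a shared coordinate frame. Returns matched index pairs
    plus the unmatched remote detections (vehicles only the peer sees).
    """
    if len(local_tracks) == 0 or len(remote_tracks) == 0:
        return [], list(range(len(remote_tracks)))

    # Cost matrix: Euclidean distance between every local/remote pair.
    cost = np.linalg.norm(
        local_tracks[:, None, :] - remote_tracks[None, :, :], axis=2
    )

    # Minimum-cost assignment on the bipartite graph (Hungarian algorithm).
    rows, cols = linear_sum_assignment(cost)

    # Gate out pairs too far apart to plausibly be the same vehicle.
    matches = [(r, c) for r, c in zip(rows, cols) if cost[r, c] <= gate]
    matched_remote = {c for _, c in matches}
    unmatched = [c for c in range(len(remote_tracks)) if c not in matched_remote]
    return matches, unmatched


# Example: two local detections, three shared by a peer; one is new.
local = np.array([[10.0, 2.0], [25.0, -1.5]])
remote = np.array([[10.4, 2.2], [24.6, -1.2], [60.0, 3.0]])
matches, new_vehicles = merge_scenes(local, remote)
print(matches)        # [(0, 0), (1, 1)]
print(new_vehicles)   # [2] -> vehicle beyond the local sensing range
```

In this framing, the trade-off studied in the paper maps naturally onto the cost function: position-only descriptors keep messages small but rely on a single distance cue, while richer descriptors would add terms to the cost matrix at the price of higher communication overhead.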