Senior Visual Navigation and Object Detection Algorithm Engineer (Remote)
- ¥180k – ¥300k
- Remote
- 3 years of experience
- Full Time
About the job
About Us:
Flyward is a technology company originating from Silicon Valley in the United States, dedicated to pushing the boundaries of innovation in fully autonomous eVTOL/flying cars. Our team is composed of vibrant individuals who are passionate about technology and share a common vision of creating breakthrough solutions. Leveraging advanced artificial intelligence and autonomous navigation technologies, we aspire to turn the concept of urban air mobility into reality, ushering in a new era where convenient and efficient flight is accessible to every household. If this vision excites you, we invite you to join our dynamic team and contribute to groundbreaking advancements in the field.
Position Overview:
As a Visual Navigation and Object Detection Algorithm Engineer, you will play a key role in our machine-vision and aerial-vehicle AI software development. You will design, develop, and optimize visual positioning and navigation algorithms, AI image-processing algorithms, object detection and recognition, situational awareness, and applications of vision-language large models (VLMs). Through your expertise in computer vision and software engineering, you will help us build the core technology for the next generation of fully autonomous flying cars and drones, ensuring safety and reliability in complex environments.
Job Responsibilities:
- Visual Navigation and Positioning Algorithms: Develop visual positioning and navigation algorithms based on camera images to ensure accuracy, robustness, and real-time performance.
- Object Detection and Recognition: Detect and recognize ground and aerial targets, including runways and airports, to enable autonomous safe landing.
- Situational Awareness: Conduct semantic segmentation of images and situational understanding to enhance the autonomy of aerial vehicles in complex environments.
- Image Processing Algorithms: Use deep learning models for dehazing, denoising, shadow removal, distortion correction, and color correction to improve image quality.
- Multithreading: Design and implement high-performance multithreading systems to optimize system resource utilization and parallel processing capabilities, meeting the requirements for real-time performance and low latency.
- VLM Application: Integrate multimodal vision-language model (VLM) technology to enhance the adaptability and robustness of visual algorithms.
- Performance Optimization: Continuously optimize software performance to address challenges related to scalability and computational resource constraints.
- Testing and Verification: Develop and implement software-side testing to verify the reliability and accuracy of visual positioning software under various conditions and scenarios.
Qualifications:
- A master's degree in Computer Science, Robotics, or a related field with 2+ years of work experience (or a bachelor's degree with 3+ years of relevant work experience).
- Proficiency in C++ and Python, with experience using deep learning models on C++ platforms.
- Extensive practical experience developing computer-vision navigation or image-processing algorithms: at least 3 years of experience in computer vision and image processing; proficiency with OpenCV; familiarity with machine learning frameworks (PyTorch, LibTorch, TensorFlow, Caffe, etc.) and with deep learning models (CNN, RNN, Transformer, VLM, etc.), with relevant project experience.
- Extensive practical experience in developing and implementing SLAM algorithms and sensor fusion algorithms, with at least 2 years of relevant experience.
- Familiarity with ROS 2 (Humble in particular); relevant project experience preferred.
- Familiarity with multithreaded programming models (such as POSIX threads, C++ std::thread, and ROS 2's Executor model); able to write thread-safe code and effectively address concurrency issues such as race conditions and deadlocks.
- Knowledge of cameras and drones, including camera calibration, geometric computer vision, image capture techniques, and drone photogrammetry.
- Extensive experience in object recognition, object detection, semantic image segmentation, and situational understanding.
Preferred Skills:
- Experience in visual positioning technology, visual navigation technology, and visual odometry, such as optical flow navigation and feature tracking navigation.
- Ability to use deep learning models for image matching, such as SuperPoint and SuperGlue.
- Experience in software development for drones, autonomous vehicles, robots, or autonomous systems.
- Experience with high-altitude drone image processing; experience processing large image datasets and real-time video is a plus.
- Fluency in English, with the ability to read and understand cutting-edge academic papers from domestic and international sources.
Benefits:
- Competitive salary and project completion bonuses.
- Profit-sharing plan.
- Flexible working hours and 100% remote work option.
- Broad opportunities for career growth and learning innovative technologies.
- Collaborative and inclusive company culture.
- Practical experience with cutting-edge technology and real-world applications.
Work Location:
Location is not limited, and 100% remote work is possible.
Working Hours:
There are no fixed daily working-hour requirements, provided that work quality and task completion are ensured.
How to Apply:
If you are passionate about the future of urban air technology and have the professional skills we need, please send your resume, relevant work samples, and project showcases to: [email protected]. We look forward to exploring the infinite possibilities of next-generation transportation in the sky with you.