What You’ll Do
Support the design and development of scalable data architectures and systems that extract, store, and process large amounts of data
Build and optimize data pipelines for efficient data ingestion, transformation, and loading from various sources while ensuring data quality and integrity
Collaborate with Data Scientists, Machine Learning Engineers, Business Analysts, and/or Product Owners to understand their requirements and provide efficient solutions for data exploration, analysis, and modeling
Implement testing, validation, and pipeline observability to ensure data pipelines meet customer SLAs
Use cutting-edge technologies to develop modern data pipelines supporting Machine Learning and Artificial Intelligence
Basic Qualifications:
Bachelor’s Degree
At least 2 years of experience in application development (Internship experience does not apply)
At least 1 year of experience in big data technologies
Preferred Qualifications:
3+ years of experience in application development including Python, Scala, or Java
1+ years of experience using Spark
1+ years of experience working on data stream systems (Kafka or Kinesis)
1+ years of data warehousing experience (Redshift or Snowflake)
1+ years of experience with Agile engineering practices
1+ years of experience working with a public cloud (AWS, Microsoft Azure, Google Cloud)
Eligibility varies based on full- or part-time status, exempt or non-exempt status, and management level.
If you have visited our website in search of information on employment opportunities or to apply for a position, and you require an accommodation, please contact Capital One Recruiting at 1-800-304-9102 or via email at . All information you provide will be kept confidential and will be used only to the extent required to provide needed reasonable accommodations.