The Data Technologies Financial Instruments Datasets team designs and builds complex data processing pipelines to bring data from external sources (websites, filings, documents, FTPs, feeds, news, etc.) into Bloomberg's ecosystem. We then transform the data into standardized, consumable schema-based payloads for downstream teams to display via various client-facing UIs/APIs for financial instrument products including Corporate Bonds, Municipal Bonds, Government Bonds, Loans, Funds, Portfolio Holdings, and Indices.
This critical security-level data is the lifeblood of the Bloomberg Terminal and Enterprise products. Our team is rebuilding these workflows from scratch. This includes:
Brand-new JSON/AVRO schema-based data models stored in big data platforms, leveraging pipes/feeds for downstream consumption.
Data processing pipelines and pre-publish business rules validation engines built as Python-based microservices.
Newly trained machine learning (ML) and natural language processing (NLP) models, along with entity recognition/disambiguation mapping services, integrated into the overall flow to identify, extract, and enrich the data.
Manual remediation worklists and UIs for analysts and vendors to correct the data when automation fails and manual intervention is required.
Some of these brand-new workflows are already underway, and many more are set to begin soon. We are looking to add skilled engineers to help us design and build these full end-to-end data processing workflows. They will fully replace the existing legacy workflows, with a focus on improving the accuracy, coverage, timeliness, and discoverability of the data we collect, as well as greatly improving the overall efficiency of the Global Data analysts and vendors who monitor these workflows.
If you are excited by the idea of designing and building a brand-new data processing pipeline to replace an existing one, where you can have a huge impact on the quality and efficiency of collecting data that is vital to Bloomberg's success, please read on and apply to this job posting!
We will trust you to:
Take ownership of critical products and their designs, then build and iterate at a rapid, incremental pace.
Work with a variety of technologies to develop innovative solutions.
Collaborate proactively in an agile, fast-paced team that works closely with other engineering teams, product/business teams, and data teams.
Be a passionate problem solver who thinks outside the box and takes smart, calculated risks to deliver the highest business value.
Deliver on time without compromising quality, while displaying strong software craftsmanship.
You will need to have:
3+ years of experience in designing and implementing complex full stack software applications.
Familiarity with database programming and data modeling (SQL, JSON/AVRO, and database/schema design).
Strong communication and interpersonal skills.
Strong analytical and creative problem solving skills.
BA, BS, MS, or PhD in Computer Science, Software Engineering, or related technology fields.
We would love to see:
Experience building critical data pipelines that flow from original unstructured source documents to structured, mapped, schema-based data stores for downstream consumption, with an emphasis on automation over manual remediation and on scalability/configurability.
An understanding of the design, implementation, and deployment of high performance, high availability, large-scale applications in a distributed environment.
Familiarity with user interface (UI) design and user experience (UX) principles.
Experience in C#/Java/.NET.
Bloomberg is an equal opportunities employer, and we value diversity at our company. We do not discriminate on the basis of race, religion, color, national origin, gender, sexual orientation, age, marital status, veteran status, or disability status.