About Me

I’m Binglin (Kevin) Ji, a second-year master’s student in Electrical Engineering and Computer Engineering at Washington University in St. Louis. I’m currently a member of the Stream Based Supercomputing Lab, advised by Prof. Roger Chamberlain. My research interests lie in Machine Learning (Generative Modeling) and Parallel Computing (AI Inference Acceleration), as well as the intersection of these two areas.

Before coming to WashU, I worked at Lenovo Research, Shanghai, China, where I developed deep learning algorithms to solve computer vision problems in industrial scenarios and built container-based systems to optimize machine learning workflows.

🔍 Online Active Target Discovery with Generative Model

Strategically sampling unobserved regions within a limited sampling budget is essential in many scientific and engineering domains. We model this problem as Active Target Discovery (ATD) and introduce novel frameworks that leverage diffusion dynamics to solve it.

Active Target Discovery under Uninformative Prior

Project Page. Inspired by neuroscience, we introduce EM-PTDM to solve the online-feedback ATD problem with zero domain knowledge. More details in our paper: Active Target Discovery under Uninformative Prior: The Power of Permanent and Transient Memory (arXiv)

Diffusion-guided Active Target Discovery

Project Page. When sufficient domain-knowledge data is available, we introduce DiffATD, the first approach to online-feedback Active Target Discovery in partially observable environments. More details in our paper: Online Feedback Efficient Active Target Discovery in Partially Observable Environments (arXiv)

⚙️ Optimizing GCN Inference on Multi-Core Systems

Project Page. Existing standard GNN libraries face performance and scalability challenges on modern multi-core systems, especially for large graphs (more than 100,000 vertices) with heavy embeddings. We optimized GCN inference with different parallel strategies chosen according to graph properties, taking into account the design trends of multi-core architectures. As a result, we achieved up to a 2.64x inference speedup over DGL v2.4.0 (Deep Graph Library) and 3.36x over PyG v2.6.1 (PyTorch Geometric), both of which used PyTorch v2.3.1 as the backend.

More details in our paper: FGI: Fast GNN Inference on Multi-Core Systems (IPDPS 2025 Workshops) 🚀
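To give a flavor of the kind of computation being parallelized, here is a minimal, generic sketch of one GCN layer's neighbor aggregation, partitioned across vertices and run on a thread pool. This is an illustration only, not the FGI implementation; the function `gcn_aggregate`, its dict-based graph representation, and the interleaved vertex partitioning are hypothetical choices made for this sketch.

```python
# Illustrative sketch (NOT the FGI implementation): one GCN layer's
# neighbor aggregation, parallelized by partitioning vertices over threads.
from concurrent.futures import ThreadPoolExecutor

def gcn_aggregate(adj, feats, workers=4):
    """Mean-aggregate neighbor features (with a self-loop) per vertex.

    adj:   dict mapping vertex -> list of neighbor vertices
    feats: dict mapping vertex -> feature vector (list of floats)
    """
    verts = list(adj)

    def agg_chunk(chunk):
        out = {}
        for v in chunk:
            nbrs = adj[v] + [v]          # include the self-loop
            dim = len(feats[v])
            acc = [0.0] * dim
            for u in nbrs:               # sum neighbor features
                for i in range(dim):
                    acc[i] += feats[u][i]
            out[v] = [x / len(nbrs) for x in acc]  # mean aggregation
        return out

    # Interleaved vertex partition: one chunk per worker thread.
    chunks = [verts[i::workers] for i in range(workers)]
    result = {}
    with ThreadPoolExecutor(max_workers=workers) as ex:
        for part in ex.map(agg_chunk, chunks):
            result.update(part)
    return result
```

In practice, the best partitioning and scheduling depend on graph structure (degree skew, embedding width), which is exactly the kind of property-dependent choice the paragraph above refers to.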

🎧 Outside of research, I enjoy Rock Music

You should check this out — one of my all-time favorites: Bon Jovi - Livin’ on a Prayer (Hyde Park 2011)