
Overview

Chronology

2009.03 - 2012.02 Kyunggi High School
2014.03 - 2020.02 Sungkyunkwan University
2019.06 - 2019.08 WeDataLab Mobile Team Intern
2020.01 - 2020.02 Lotte e-Commerce Search Development Team Intern
2020.08 - 2021.03 Lotte e-Commerce Search Development Team
2021.03 - 2022.11 Lotte e-Commerce Data Platform Development Team
2022.11 - 2024.10 Gmarket Seller & SD Engineering Team
2024.10 - 2025.11 Gmarket Pricing Tech Engineering Team
2025.11 - Current Gmarket MarTech Engineering

Core Themes

How I Repeatedly Create Value

Performance tuning and large-scale data processing

I have repeatedly worked on high-cost data paths: converting 400GB-scale synchronization into stream-based processing, reducing pricing jobs from 1.5 hours to 10-20 minutes, and cutting inquiry lookup time from 1 minute to 0.3 seconds.

My default approach is to remove unnecessary work first, then stabilize the processing path so it scales operationally.

Automation and operational efficiency

I prefer turning repeated human work into systems. That shows up in Jenkins orchestration, DB monitoring and alerts, .http test flows, husky/githook automation, and multiple personal Chrome Extension tools.

The goal is consistent: fewer manual steps, fewer mistakes, and faster operational feedback.

Architecture and platform setup

I often take the role of laying the first usable foundation: Spring Batch, Spring Kafka, React/Next.js base projects, or a Java extension environment on top of a shared .NET admin system.

Rather than aiming for theoretical purity first, I prefer structures that teams can actually run, extend, and debug.

Execution across planning, UX, and engineering

Even in back-office products, I tend to work beyond implementation only. Inquiry flows, temporary save UX, and operator-facing matching workflows are examples where I changed both the product behavior and the technical structure together.

Selected Work

Selected Projects

2019
WeDataLab

[Intern] Development of a Simple Webview App Using React Native

Developed a simple app using Webview to enable an existing website to run on mobile.

Added functions for cookie and session management and configured the navigator.

2020
Lotte e-Commerce

[Intern] Participated in Integrated EC Development (now Lotte ON) and Lotte Department Store's Premium Site Development

Developed the 'Best' catalog page and a hamburger menu for the Lotte Department Store Premium site.

Configured screens using Freemarker by combining server APIs with HTML from web publishers.

Implemented a function to list brand names A-Z and display an index marker at the start of each new letter (e.g., 'B' when transitioning from Azure to Barbie). The initial Map-based approach in Freemarker, however, ran into memory shortages.

Optimized by keeping only the first letter of the previous brand in memory and emitting the index marker whenever the letter changed inside the for loop, reducing memory load and shipping the feature.
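The letter-boundary trick can be sketched in Python (illustrative names; the original was Freemarker template logic):

```python
def index_markers(brands):
    """Given an A-Z sorted brand list, return (marker, brand) pairs where
    marker is the first letter only when it differs from the previous brand's
    first letter -- only one character is kept in memory, not a whole Map."""
    prev = None  # first letter of the previous brand: the only state kept
    out = []
    for name in brands:
        letter = name[0].upper()
        out.append((letter if letter != prev else None, name))
        prev = letter
    return out

pairs = index_markers(["Apple", "Azure", "Barbie", "Bose"])
# "Barbie" gets a "B" marker because the letter changed from "A"
```

The same single-variable state translates directly into a Freemarker loop variable.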


Admin Page Development for Client's Search Backlogs

Developed a back-office system for quickly retrieving clients' search-term backlog data stored in AuroraDB.

Built using Spring Web and basic HTML, CSS, and JS.

2021
Lotte e-Commerce

Development of an automated program for transferring price data

Developed code that sequentially downloads approximately 130,000 Excel files using Python and BeautifulSoup to collect all data from another company's seller admin site.

Implemented a pipeline that bulk-loads the downloaded files into the DB with the LOAD DATA command. To solve the speed degradation and memory problems of large-scale processing, adopted sequential file-by-file processing.

Tested on Windows with Parallels to account for OS differences so the operations team could run it directly, then packaged it as an executable (.exe) with PyInstaller for distribution, making it easy for non-developers to use.
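The sequential file-by-file load can be sketched as follows, assuming each downloaded workbook has already been exported to tab-separated text; the table name and field options are placeholders, not the production schema:

```python
def load_statements(paths, table="price_data"):
    """Yield one LOAD DATA statement per file so files are processed
    one at a time (sequentially) instead of being held in memory at once.
    Table and delimiter details are illustrative placeholders."""
    for path in paths:
        yield (f"LOAD DATA LOCAL INFILE '{path}' INTO TABLE {table} "
               "FIELDS TERMINATED BY '\\t' IGNORE 1 LINES")

# Each statement is executed and committed before the next file is touched.
stmts = list(load_statements(["a.tsv", "b.tsv"]))
```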


Built a DB monitoring and alerting function

Developed a bash program that posts DB status values to Slack as alerts.

Monitors DB deadlocks by periodically checking performance_schema and information_schema.

Runs every minute via cron and stays in the background through a tmux session.
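The per-minute alerting decision can be sketched like this (the real check was a bash script against performance_schema; the counter source and message format here are illustrative):

```python
def deadlock_alert(current_count, last_count):
    """Return a Slack-style message when the deadlock counter has advanced
    since the previous one-minute check, else None. In production the counts
    would be read from performance_schema; here they are plain integers."""
    if current_count > last_count:
        return f":warning: {current_count - last_count} new deadlock(s) detected"
    return None
```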


Migrated the PCS domain's back-office admin to React

Renewed the back-office frontend from the legacy axboot.js to React.

Performed load-balancing work by adding Nginx in front of the existing Apache httpd.

Maximized reusability by splitting components into minimal units, shortening the average turnaround of subsequent tasks by two weeks.

2022
Lotte e-Commerce (~2022.11), Gmarket (2022.11~)

Test execution and Job Orchestration through Jenkins

Built a Jenkins system that integrates build, test, deployment, execution, log checks, and upstream/downstream server relationships, automating large-scale Spring Batch data-processing jobs without server access.

This replaced the inefficient process of developers SSH-ing into servers to run batch code and check logs manually. Implemented CI/CD integration and job dependency management with Jenkins Pipeline.

Made jobs dynamically adjustable with Parameterized Trigger and Groovy scripts, solving the difficulty of guaranteeing execution order and adding retry logic when running multiple jobs.
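The order-guarantee and retry behavior can be modeled in a few lines of Python, as a simplified stand-in for the Groovy pipeline logic (not the actual Jenkinsfile):

```python
def run_jobs(order, run, retries=2):
    """Run jobs strictly in the given order; retry a failing job up to
    `retries` extra times and stop the chain if it still fails, so a
    downstream job never runs after an upstream failure."""
    results = {}
    for job in order:
        for attempt in range(retries + 1):
            if run(job, attempt):
                results[job] = "ok"
                break
        else:  # all attempts failed
            results[job] = "failed"
            break  # do not start downstream jobs
    return results
```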


Transition from Batch to Stream for product price data synchronization

Improved the DB synchronization process for product price data used in dynamic pricing. The previous method read roughly 400GB of data in batch after DB replication.

Problem: because the batch method first truncates the price table and then refills it, any dynamic pricing job running during synchronization hit errors caused by the resulting data gap. Solved this by loading into a temporary table instead of truncating, then swapping it in with RENAME.

Additional issue: as the product count grew, the synchronization job itself took longer and runs began to overlap. Finally, after an initial load with mysqldump, developed a Kafka consumer app that applies only the changes in real time.

Achievement: expanded the number of discounted products from 50 million to 280 million and made price data real-time. Enhanced stability with poison-pill handling logic and transactions designed to avoid row locks.
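The temp-table-plus-RENAME swap can be sketched as follows. sqlite3 is used only to make the demo runnable; in MySQL a single multi-table RENAME TABLE performs the swap atomically, which sqlite's two separate ALTERs do not:

```python
import sqlite3

# Gap-free swap sketch: load fresh prices into a temp table, then swap names
# so readers never observe an empty (truncated) price table.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE price (item_no INTEGER, price INTEGER)")
conn.execute("INSERT INTO price VALUES (1, 900)")        # stale row, still readable
conn.execute("CREATE TABLE price_tmp (item_no INTEGER, price INTEGER)")
conn.execute("INSERT INTO price_tmp VALUES (1, 1000)")   # fresh synchronization load
conn.execute("ALTER TABLE price RENAME TO price_old")    # swap instead of TRUNCATE
conn.execute("ALTER TABLE price_tmp RENAME TO price")
conn.execute("DROP TABLE price_old")
row = conn.execute("SELECT price FROM price WHERE item_no = 1").fetchone()
```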


Renewal of the product price change system's main Batch

Optimized the price change processing speed by applying Kafka-based parallel processing and Producer-Consumer pattern.

Shortened processing time from 1.5 hours to 10-20 minutes using branch processing and parallelism driven by Enums and Maps.
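The Enum-and-Map branch dispatch might look like this in outline (change types, handlers, and discount figures are invented for illustration):

```python
from enum import Enum

class ChangeType(Enum):
    DISCOUNT = "discount"
    ROLLBACK = "rollback"

# Map-based branch dispatch: each change type routes to its own handler,
# which also makes per-branch parallel execution straightforward.
HANDLERS = {
    ChangeType.DISCOUNT: lambda price: int(price * 0.9),  # illustrative rate
    ChangeType.ROLLBACK: lambda price: price,
}

def apply_change(change_type, price):
    return HANDLERS[change_type](price)
```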

Reached the number one lowest-price share on Naver (as of December 2021) by matching the speed of competitors' lowest-price updates. Aligned the response with company strategy by adjusting discount ratios, addressing the problem of sales growing while net profit shrank.

2023
Gmarket

Built the real-time product-page inflow aggregation pipeline for seller analytics

Built the product-page inflow aggregation pipeline required to open seller analytics, while introducing Spring Batch and Spring Kafka Consumer foundations for the first time in the team.

Instead of fully parsing oversized JSON string logs that also contained cookies and other irrelevant metadata, I extracted host/path first with substring-based filtering and then applied Regex and MutableMap-based secondary selection for only meaningful product-page events.

This reduced unnecessary parsing cost, improved CPU efficiency, and shortened processing time by more than 20% on high-volume traffic.
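A minimal sketch of the substring-first filtering, assuming a JSON log with a url field and an /item/&lt;number&gt; product path (both assumptions, not the actual log schema):

```python
import json
import re

PRODUCT_PATH = re.compile(r"/item/(\d+)")  # assumed product-page URL pattern

def extract_item_no(raw_log):
    """Cheap substring check first; full JSON parsing only for likely hits.
    The 'url' field and '/item/<n>' path are illustrative assumptions."""
    if "/item/" not in raw_log:  # skips most irrelevant logs without parsing
        return None
    match = PRODUCT_PATH.search(json.loads(raw_log)["url"])
    return match.group(1) if match else None
```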

I also added Kibana dashboards and Microsoft Teams/Slack webhook alerts so the team could detect pipeline errors immediately.


Built the seller analytics system and designed its data pipeline

When seller analytics was introduced for the first time during the seller-site renewal, I had to design not only the UI but also how the data itself would be collected, accumulated, and aggregated.

At the time, most batch jobs were still running inside a single Spring Web application with @Scheduled, which caused overlap-driven OOM issues. I reduced that risk by separating batch execution responsibility into a dedicated Spring Batch Job-Step application.

For real-time inflow analytics, I built a Kafka-based pipeline. Because the frontend messages were high-volume oversized JSON strings, I removed unnecessary parsing and extracted only the required values with substring and regex, improving both throughput and CPU usage before sending the refined data to Elasticsearch.

On the frontend side, I stabilized a complex analytics calendar by comparing Jotai and Context API, and chose Context API because page-owned state and sibling-component coordination mattered more than file-scoped global atoms in this case.

2024
Gmarket

Reorganization of ESM inquiry page

Renewed the emergency message and inquiry pages for CS processing, originally built in 2008.

Increased the response rate by more than 5% by adding a multi-response function. Improved user experience and data stability by reworking the frontend message-sending flow and introducing temporary-save logic backed by browser cache storage.

Reduced lookup time from up to 1 minute to 0.3 seconds (roughly a 200x improvement) by changing the processing method: instead of the existing double FOR LOOP with multiple SP calls, consolidated multi-seller information into a single table with OPENJSON. Performance comparison video

Further increased speed by replacing GROUP BY with conditional aggregation using the ISNULL(SUM(CASE WHEN ...)) pattern in the backend, e.g., for per-type aggregations such as delivery, return, and cancellation.
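The conditional-aggregation idea, shown as an equivalent single-pass computation in Python (inquiry types taken from the example in the text):

```python
def count_by_type(rows):
    """Single-pass conditional aggregation: the Python analogue of
    ISNULL(SUM(CASE WHEN type = ... THEN 1 END), 0) columns replacing a
    GROUP BY, so one scan yields all per-type totals at once."""
    counts = {"delivery": 0, "return": 0, "cancel": 0}
    for row in rows:
        if row["type"] in counts:
            counts[row["type"]] += 1
    return counts
```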

Introduced integration testing based on .http files for test efficiency, improving development and maintenance productivity.


Databricks migration of the product-page real-time inflow aggregation process

Changed the workflow to read and aggregate data loaded into Databricks as a model, in line with company requirements.

This made it possible to retire 12 Kafka consumer apps, Elasticsearch (TPS 77, 12GB), and 4 Spring Batch apps, reducing annual job costs to approximately 300,000 KRW.


SSG.com seller linkage and customer inquiry linkage

Built an environment for large-scale synchronization of seller and customer inquiry data between SSG.com and Gmarket. Implemented complex logic to manage product linkage depending on seller grade, type, and status changes.

Problem: seller status (grade/type/status) changes across multiple domains, making synchronization detection difficult. Introduced MSSQL triggers to solve this.

When a seller consents or a status changes, seller-status linkage to SSG and product-linkage status to the Item team are delivered in a daily batch.

Completed the work independently despite the resignation of 2 project team members, met the development due date, and kept failures to 3 or fewer over 3 months.


Building PricingTF, a third-party product-matching and price-analysis system

Participated in initial planning and outsourcing-vendor selection for the data flow, aimed at gaining price competitiveness against external competitors.

Proposed and developed a new UX to replace the existing manual matching method (manually distributing work among operators, then searching by product number and matching competitor products one by one). In the new flow the system automatically selects the product number and surfaces competitor candidates, improving matching speed by 3x.

Specifically, prevented duplicate work by locking items at read time with findAndModify, using a datetime column in MongoDB so an item becomes claimable again after 5 minutes. Also developed the admin for the outsourced matching company with .NET and Vue.
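The 5-minute lease rule can be sketched as follows; in MongoDB, findAndModify makes the read-and-lock a single atomic operation, which this in-memory version only imitates:

```python
from datetime import datetime, timedelta

LEASE = timedelta(minutes=5)  # claim window from the text

def claim(item, now):
    """Take the item only if it is unclaimed or its lease has expired,
    mirroring a findAndModify on a datetime column. The dict stands in
    for a MongoDB document; field names are illustrative."""
    locked_at = item.get("locked_at")
    if locked_at is None or now - locked_at >= LEASE:
        item["locked_at"] = now
        return True
    return False
```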

2025
Gmarket

Built a mapping pipeline for third-party search-page product lists

Developed a product-data processing pipeline using Databricks in collaboration with the AI team. After the AI team extracts keywords from our product data, a deduplicated TSV list is generated through Spark using explode.

The data is delivered to an outsourcing company; when the company returns the day's list of products collected from the search page, the AI team matches it against our product names. Implemented the process that loads the final result into MongoDB.
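The explode-and-deduplicate step can be sketched in plain Python (the real job ran in Spark; the column layout is illustrative, not the exact schema):

```python
def to_tsv(products):
    """Flatten (explode) each product's keyword list into one row per keyword,
    drop duplicate rows while preserving order, and emit TSV lines."""
    seen, lines = set(), []
    for p in products:
        for kw in p["keywords"]:
            row = (p["item_no"], kw)
            if row not in seen:
                seen.add(row)
                lines.append(f"{p['item_no']}\t{kw}")
    return lines
```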

Introduced JWT authentication with an expiration date to strengthen security with outsourcing companies when transferring files.
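An expiring signed token can be sketched with the standard library as follows; this is a JWT-style illustration using HMAC, not a spec-compliant JWT, and the secret and subject are placeholders:

```python
import hashlib
import hmac
import time

SECRET = b"shared-secret"  # placeholder; the real key is exchanged out of band

def issue(subject, ttl, now=None):
    """Issue a compact signed token carrying an expiry timestamp."""
    exp = int(now if now is not None else time.time()) + ttl
    payload = f"{subject}.{exp}"
    sig = hmac.new(SECRET, payload.encode(), hashlib.sha256).hexdigest()
    return f"{payload}.{sig}"

def verify(token, now=None):
    """Reject tokens with a bad signature or a past expiry."""
    subject, exp, sig = token.rsplit(".", 2)
    payload = f"{subject}.{exp}"
    good = hmac.compare_digest(
        sig, hmac.new(SECRET, payload.encode(), hashlib.sha256).hexdigest())
    return good and int(exp) > (now if now is not None else time.time())
```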


Built a Chrome Extension for outsourced data collection, product matching support, and automated distribution

Manual product matching by outsourced operators had become the main bottleneck because they had to copy and paste URLs, prices, keywords, and other fields into the admin one by one. I kept the existing admin backend but rebuilt the frontend workflow as a Chrome Extension.

After that change, an operator only needed to click a matching button and the extension would automatically collect the link, keyword, price, product name, image link, and related fields together.

I also connected AI-team APIs to suggest search keywords and pre-filter candidate products with name-similarity scoring. To reduce blocks from some external sites, I automated a browser-like navigation flow that started from the search page instead of repeatedly entering direct links.
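The name-similarity pre-filter can be sketched with difflib as a stand-in for the AI team's scoring API, which the text does not detail:

```python
from difflib import SequenceMatcher

def rank_candidates(our_name, candidates):
    """Order candidate products by name similarity to our product, highest
    first. SequenceMatcher.ratio is an illustrative scorer, not the actual
    AI-team model."""
    scored = [(SequenceMatcher(None, our_name, c).ratio(), c) for c in candidates]
    return [c for score, c in sorted(scored, reverse=True)]
```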

For distribution, I moved away from manual developer-mode installation and introduced update.xml-based automatic updates with packaged .crx delivery, reducing source exposure while making hotfixes and updates much easier.


Built the pricing back-office system and standardized it as a reusable admin template

The pricing workflow had previously been managed through spreadsheets, which had limits in data consistency, access control, and history management. I redesigned it as an actual internal decision-support back-office system instead of a simple screen migration.

On top of the existing .NET admin environment, I built a standardized Spring Boot + Thymeleaf admin template so that new admin systems could be created quickly by changing configuration and domain values, rather than rebuilding login, permissions, headers, and menu structure each time.

For data access, I designed a query-oriented architecture with direct Databricks JDBC integration. Large CSV downloads were implemented with streaming instead of server-side file generation, minimizing memory usage and making chunked or batched handling possible when needed.
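The streaming CSV idea can be sketched with a generator; in production the rows would come from a Databricks JDBC cursor rather than a list, and the column names here are illustrative:

```python
import csv
import io

def stream_csv(rows, header):
    """Yield the CSV response chunk by chunk instead of building a server-side
    file, keeping memory usage flat regardless of result size."""
    buf = io.StringIO()
    writer = csv.writer(buf)
    writer.writerow(header)
    yield buf.getvalue()
    for row in rows:
        buf.seek(0)
        buf.truncate()          # reuse one small buffer per row
        writer.writerow(row)
        yield buf.getvalue()

chunks = list(stream_csv([("A", 1000), ("B", 2000)], ["item", "price"]))
```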

The focus was not just feature delivery, but building a reusable operating structure that other services could adopt with minimal additional work.

Knowledge Sharing

Knowledge Sharing & Community

Lotte e-Commerce technical blog contribution and online presentation

The write-up was selected as the first article on the company's technical blog, recorded, and shared as an online talk in June 2022, where I participated as one of three presenters: Blog Post

Activities as chief editor of the Gmarket Tech Blog

Introduced a Git-based PR workflow to simplify the contribution process and make spell-checking and proofreading more efficient.

Increased visits from an average of about 10 per day to more than 500 by adding external-site RSS feeds and renewing the UI, reaching 450,000 total visits.

Improved the reader experience by resolving CORS conflicts, managing spam comments, and removing the secret-post option.

List of blog posts directly contributed:

Company-wide in-house learning session (SLS) presentations

I gave two presentations at the Staff Learning Session (SLS), where employees participate as speakers. The first focused on zero-cost AI automation, covering free AI tools, ways to run programs without servers, free code editors, and a live Grok-based Chrome Extension demo.

The second focused on static and dynamic crawling, the six stages of AI evolution, and a live demo of an AI Browser Agent built around MCP (Model Context Protocol) to discuss next-generation web automation and workflow efficiency.

Industry practitioner lectures for job seekers and project judging / 2025.05 ~ Present

I have delivered career-focused lectures for job seekers and served as a project judge seven times, focusing less on result-listing and more on explaining problem framing, decision rationale, and operating criteria.

I also covered presentation structure and Q&A handling so participants could explain their technical choices in a clear and defensible way.

Public-sector digital classes for K-12 students / 2025.08 ~ Present

I designed hands-on classes using Entry, Python, and Java so students could move characters, draw, compose, and build things instead of stopping at syntax explanations.

After introducing the roles of HTML, CSS, and JavaScript, I guided students to build their own webpages with AI assistance and share the results for peer feedback.

Project mentoring (3 programs, incl. government-supported track) / 2025.06 ~ Apr 2026 (scheduled)

I completed two education-program project mentoring engagements, and I am scheduled to join a government-supported mentoring program starting in April 2026.

Feedback focused on service design, performance bottlenecks, data flow, and presentation/Q&A structure.

Hackathon: Interest-based recommendation system through pre-query search bar

Participated as a one-person team, handling all planning, presentation, and development.

Built a prototype that asked a short pre-query question before search input and used the response to infer an anonymous user's likely interests and buying context, then connected that signal to the recommendation flow.

Although it did not make the final rankings, it was selected as a patent review candidate.

Demo prototype site

Patent

Traffic-signal control system for preventing intersection gridlock / Patent application

Application No.: 10-2019-0094108

Proposed a software-based control system that analyzes intersection CCTV footage with OpenCV and YOLO, detects sustained congestion in a target zone, and triggers a red-light transition when thresholds are exceeded.
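The sustained-congestion trigger can be sketched as a debounce over per-frame occupancy (threshold and frame counts are illustrative, not the values defined in the application):

```python
def should_switch(occupancy_series, threshold=0.8, sustain=3):
    """Trigger the red-light transition only when zone occupancy stays at or
    above the threshold for `sustain` consecutive frames, filtering out
    momentary spikes from the CCTV detection stream."""
    run = 0
    for occ in occupancy_series:
        run = run + 1 if occ >= threshold else 0
        if run >= sustain:
            return True
    return False
```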

Core Stack

Tools I Reach For

Java JavaScript Python React MongoDB AWS Node.js Databricks Vue.js .NET Spring Linux MSSQL Kotlin