Optimizing Cloudlets for Faster Feedback in LLM-Based Code-Evaluation Systems

Server: Preprints.org
DOI: 10.20944/preprints202511.1744.v1

This paper addresses the challenge of optimizing cloudlet resource allocation in a code evaluation system. The study models the relationship between system load and response time when users submit code to an online code evaluation platform called LambdaChecker, which operates a cloudlet-based processing pipeline. The pipeline includes code correctness checks, static analysis, and design-pattern detection using a local Large Language Model (LLM). To optimize the system, the authors develop a mathematical model and apply it to LambdaChecker resource management. The proposed approach is assessed using both simulations and real contest data, focusing on improvements in average response time, resource-utilization efficiency, and user satisfaction. The results indicate that adaptive scheduling and workload prediction effectively reduce waiting times without substantially increasing operational costs. Overall, the study suggests that systematic cloudlet optimization can enhance the educational value of automated code evaluation systems by improving responsiveness while preserving sustainable resource usage.
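The abstract describes a model relating system load to response time and using it to size cloudlet capacity. The paper's actual model is not reproduced here, but a common starting point for this kind of analysis is an M/M/c queue, where submissions arrive at rate λ, each cloudlet evaluates at rate μ, and the Erlang C formula gives the probability a submission waits. The sketch below is a hypothetical illustration under that assumption; the function names, parameters, and the response-time target are invented for the example and are not taken from the paper.

```python
import math

def erlang_c(c, offered_load):
    """Probability an arriving job must wait in an M/M/c queue.
    offered_load = lambda / mu (Erlangs); c = number of cloudlets."""
    if offered_load >= c:
        return 1.0  # unstable regime: queue grows without bound
    summation = sum(offered_load**k / math.factorial(k) for k in range(c))
    term = offered_load**c / (math.factorial(c) * (1 - offered_load / c))
    return term / (summation + term)

def mean_response_time(arrival_rate, service_rate, c):
    """Mean time a submission spends in the system (waiting + evaluation)."""
    offered_load = arrival_rate / service_rate
    if offered_load >= c:
        return float("inf")
    p_wait = erlang_c(c, offered_load)
    mean_wait = p_wait / (c * service_rate - arrival_rate)
    return mean_wait + 1.0 / service_rate

def min_cloudlets(arrival_rate, service_rate, target, c_max=64):
    """Smallest cloudlet count whose mean response time meets the target."""
    for c in range(1, c_max + 1):
        if mean_response_time(arrival_rate, service_rate, c) <= target:
            return c
    return None  # target unreachable within c_max cloudlets
```

For example, with 5 submissions/s and a 1 s mean evaluation time, `min_cloudlets(5.0, 1.0, 1.05)` searches for the smallest capacity keeping mean response time within 5% of the pure service time. An adaptive scheduler, as the abstract suggests, could re-run such a sizing step as the predicted arrival rate changes during a contest.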
