Write a PREreview

Leveraging Machine Learning for Enhanced Linear Codes Equivalence Computation

Posted
Server
Preprints.org
DOI
10.20944/preprints202510.1322.v1

Equivalence testing is a crucial concept in coding theory; however, its computation suffers from bottlenecks such as high runtime complexity, a consequence of relying on combinatorial algorithms. To address this, the study proposes an equivalence testing framework for linear codes using a Graph Neural Network (GNN). The proposed framework was assessed with a focus on its effectiveness and generalization capabilities. Across five runs, the framework yielded models that consistently achieved strong performance metrics, including an average accuracy of 99.576%, an F1-score of 99.616%, and a ROC AUC score of 1.000. These results underscore the framework's robustness and its ability to reliably detect equivalence between linear codes under coordinate permutation. The resulting model was then compared with two established linear code equivalence testing methods, namely the Support Splitting Algorithm and the canonical form approach, which further affirms the efficiency of the proposed GNN-based method.
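To make the problem concrete for reviewers: two binary linear codes are permutation equivalent when some reordering of the codeword coordinates maps one code onto the other. The sketch below is a hypothetical brute-force illustration of that definition (not the paper's method, nor the Support Splitting Algorithm); its factorial search over permutations is exactly the kind of combinatorial cost the abstract says the GNN framework is meant to avoid.

```python
import numpy as np
from itertools import permutations

def codewords(G):
    """All codewords of the binary linear code generated by G (arithmetic over GF(2))."""
    k = G.shape[0]
    msgs = np.array(list(np.ndindex(*(2,) * k)))  # all 2^k message vectors
    return {tuple(c) for c in (msgs @ G) % 2}

def permutation_equivalent(G1, G2):
    """Brute-force check: does some column (coordinate) permutation of G1's code
    equal G2's code? Cost is O(n!) -- the combinatorial bottleneck in question."""
    C2 = codewords(G2)
    n = G1.shape[1]
    for p in permutations(range(n)):
        if codewords(G1[:, list(p)]) == C2:
            return True
    return False

# A small [4,2] binary code and a coordinate-permuted copy of it
G1 = np.array([[1, 0, 1, 1],
               [0, 1, 0, 1]])
G2 = G1[:, [2, 0, 3, 1]]  # permute the 4 coordinates

print(permutation_equivalent(G1, G2))  # True
```

This is only feasible for toy lengths; for realistic code lengths the permutation space is astronomically large, which motivates learned approaches such as the one proposed here.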

You can write a PREreview of Leveraging Machine Learning for Enhanced Linear Codes Equivalence Computation. A PREreview is a review of a preprint and can vary from a few sentences to a lengthy report, similar to a journal-organized peer-review report.

Before you start

We will ask you to log in with your ORCID iD. If you don’t have an iD, you can create one.

What is an ORCID iD?

An ORCID iD is a unique identifier that distinguishes you from everyone with the same or similar name.
