Professional Experience

  • 2020 – Present

    Senior Lecturer

    Department of Computer Science & Engineering, University of Moratuwa,
    Sri Lanka

  • 2020 – 2021

    Research Fellow

    LIRNEasia,
    Sri Lanka

  • 2014 – 2020

    Graduate Research/Teaching Fellow

    University of Oregon, Department of Computer and Information Science,
    USA

  • 2018

    Givens Associate

    Argonne National Laboratory,
    USA

  • 2011 – 2020

    Lecturer

    Department of Computer Science & Engineering, University of Moratuwa,
    Sri Lanka

  • 2013 – 2014

    Researcher

    LIRNEasia,
    Sri Lanka

  • 2013 – 2014

    Visiting Lecturer

    Northshore College of Business and Technology,
    Sri Lanka

Education

  • Ph.D. 2020

    Ph.D. in Computer & Information Science

    University of Oregon, USA

  • MS 2016

    MS in Computer & Information Science

    University of Oregon, USA

  • B.Sc. 2011

    B.Sc. Engineering (Hons) in Computer Science & Engineering

    University of Moratuwa, Sri Lanka

Featured Research

Improving the quality of Web-mined Parallel Corpora of Low-Resource Languages using Debiasing Heuristics


A. Fernando, S. Ranathunga, and N. de Silva

arXiv preprint arXiv:2502.19074, 2025.

Parallel Data Curation (PDC) techniques aim to filter out noisy parallel sentences from web-mined corpora. Prior research has demonstrated that ranking sentence pairs using similarity scores on sentence embeddings derived from Pre-trained Multilingual Language Models (multiPLMs), and training NMT systems with the top-ranked samples, yields better NMT performance than training with the full dataset. However, it has also been shown that the choice of multiPLM significantly impacts the ranking quality. This paper investigates the reasons behind this disparity across multiPLMs. Using the web-mined corpora CCMatrix and CCAligned for En-Si, En-Ta and Si-Ta, we show that different multiPLMs (LASER3, XLM-R, and LaBSE) are biased towards certain types of sentences, which allows noisy sentences to creep into the top-ranked samples. We show that by employing a series of heuristics, this noise can be removed to a certain extent. This improves the performance of NMT systems trained with web-mined corpora and reduces the disparity across multiPLMs.
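The ranking step described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: it assumes the source- and target-side sentence embeddings have already been produced by a multiPLM such as LaBSE, and simply ranks sentence pairs by the cosine similarity of their aligned embeddings, best first.

```python
import numpy as np

def rank_pairs(src_emb: np.ndarray, tgt_emb: np.ndarray) -> np.ndarray:
    """Rank parallel sentence pairs by cosine similarity, best first.

    src_emb, tgt_emb: (n_pairs, dim) arrays of sentence embeddings,
    where row i of each array embeds the two sides of pair i.
    (In the paper, these would come from a multiPLM such as LaBSE.)
    """
    # L2-normalise each embedding so the dot product equals cosine similarity.
    src = src_emb / np.linalg.norm(src_emb, axis=1, keepdims=True)
    tgt = tgt_emb / np.linalg.norm(tgt_emb, axis=1, keepdims=True)
    scores = (src * tgt).sum(axis=1)
    # Indices of pairs sorted by descending similarity; a PDC pipeline
    # would keep the top-k pairs for NMT training.
    return np.argsort(-scores)
```

Different multiPLMs produce different embedding spaces, so the same corpus yields different rankings, which is the disparity the paper's debiasing heuristics target.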