Professional Experience

  • 2020 – Present

    Senior Lecturer

    Department of Computer Science & Engineering, University of Moratuwa,
    Sri Lanka

  • 2020 – 2021

    Research Fellow

    LIRNEasia,
    Sri Lanka

  • 2014 – 2020

    Graduate Research/Teaching Fellow

    University of Oregon, Department of Computer and Information Science,
    USA

  • 2018

    Givens Associate

    Argonne National Laboratory,
    USA

  • 2011 – 2020

    Lecturer

    Department of Computer Science & Engineering, University of Moratuwa,
    Sri Lanka

  • 2013 – 2014

    Researcher

    LIRNEasia,
    Sri Lanka

  • 2013 – 2014

    Visiting Lecturer

    Northshore College of Business and Technology,
    Sri Lanka

Education

  • Ph.D. 2020

    Ph.D. in Computer & Information Science

    University of Oregon, USA

  • MS 2016

    MS in Computer & Information Science

    University of Oregon, USA

  • BSc 2011

    B.Sc. Engineering (Hons) in Computer Science & Engineering

    University of Moratuwa, Sri Lanka

Featured Research

Automatic Analysis of App Reviews Using LLMs

Sadeep Gunathilaka and Nisansa de Silva

Proceedings of the Conference on Agents and Artificial Intelligence, 2025, pp. 828–839.

Large Language Models (LLMs) have shown promise in various natural language processing tasks, but their effectiveness for app review classification to support software evolution remains unexplored. This study evaluates commercial and open-source LLMs for classifying mobile app reviews into bug reports, feature requests, user experiences, and ratings. We compare the zero-shot performance of GPT-3.5 and Gemini Pro 1.0, finding that GPT-3.5 achieves superior results with an F1 score of 0.849. We then use GPT-3.5 to autonomously annotate a dataset for fine-tuning smaller open-source models. Experiments with Llama 2 and Mistral show that instruction fine-tuning significantly improves performance, with results approaching commercial models. We investigate the trade-off between training data size and the number of epochs, demonstrating that comparable results can be achieved with smaller datasets and increased training iterations. Additionally, we explore the impact of different prompting strategies on model performance. Our work demonstrates the potential of LLMs to enhance app review analysis for software engineering while highlighting areas for further improvement in open-source alternatives.
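The zero-shot setup described above can be sketched as a prompt-and-parse loop. This is an illustrative assumption, not the paper's actual prompts or code: the category names follow the abstract, while the prompt wording and the `build_prompt`/`parse_label` helpers are hypothetical.

```python
# Hypothetical sketch of zero-shot app-review classification with an LLM.
# The four categories come from the abstract; everything else is illustrative.

CATEGORIES = ["bug report", "feature request", "user experience", "rating"]

def build_prompt(review: str) -> str:
    """Compose a zero-shot prompt asking the model to pick one category."""
    options = ", ".join(CATEGORIES)
    return (
        "Classify the following mobile app review into exactly one of these "
        f"categories: {options}.\n"
        f"Review: {review}\n"
        "Answer with the category name only."
    )

def parse_label(response: str) -> str:
    """Map a raw model response back onto a known category (case-insensitive)."""
    text = response.strip().lower()
    for category in CATEGORIES:
        if category in text:
            return category
    return "unknown"

if __name__ == "__main__":
    prompt = build_prompt("The app crashes every time I open the camera.")
    print(prompt)
    # In the study, `prompt` would be sent to GPT-3.5 or Gemini Pro 1.0;
    # here we only show how a returned string would be normalized.
    print(parse_label("Bug Report."))
```

In a fine-tuning variant, the same prompt/label pairs (annotated autonomously by GPT-3.5, per the abstract) would serve as instruction-tuning data for smaller open-source models such as Llama 2 or Mistral.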