

AITesting 2019: IEEE International Conference On Artificial Intelligence Testing - Call for papers, ranking, acceptance rate, submission deadline, notification date, conference location, submission guidelines, and other important details


This article provides the call for papers, ranking, acceptance rate, submission deadline, notification date, conference location, submission guidelines, and other important details of AITesting 2019: IEEE International Conference On Artificial Intelligence Testing, all in one place.

Conference Location: San Francisco, California, USA
Conference Date: 2019-04-04
Notification Date: 2019-02-15
Submission Deadline: 2018-12-07
Conference Website and Submission Link: http://www.ieeeaitests.com/index.html


Conference Ranking


IEEE International Conference On Artificial Intelligence Testing ranking based on CCF, Core, and Qualis is shown below:

  • CCF Ranking
  • Core Ranking
  • Qualis Ranking

  • About CCF Ranking: The China Computer Federation (CCF) ranking provides a ranking of peer-reviewed journals and conferences in the field of computer science.

  • About Core Ranking: The CORE Conference Ranking assesses the major conferences in the computing field. It is governed by the CORE Executive Committee. To learn more, visit the CORE ranking portal.

  • About Qualis Ranking: This conference ranking is published by the Brazilian Ministry of Education. It uses the h-index as a performance metric to rank conferences; a short sketch of how an h-index is computed from citation counts follows this list. Conferences are classified into performance groups ranging from A1 (the best) and A2 down through B1, B2, ..., B5 (the worst). To learn more, visit the Qualis ranking portal.
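
For a rough illustration of the metric mentioned above, the Python sketch below computes an h-index from a list of per-paper citation counts. The citation counts are hypothetical and the code is a simplified example, not the official Qualis methodology.

    def h_index(citations):
        """Largest h such that at least h papers have h or more citations each."""
        counts = sorted(citations, reverse=True)
        h = 0
        for rank, cites in enumerate(counts, start=1):
            if cites >= rank:
                h = rank  # the top `rank` papers all have at least `rank` citations
            else:
                break
        return h

    # Hypothetical citation counts for papers published at a venue.
    print(h_index([10, 8, 5, 4, 3, 2]))  # prints 4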

Conference Acceptance Rate


Below is the acceptance rate of the IEEE International Conference On Artificial Intelligence Testing for the last few years:

Year | Submitted Papers | Accepted Papers | Acceptance Rate (%)

We are working to collect and update the acceptance rate details of this conference for recent years. If figures are available above, you can use them to estimate the average chance of your research paper being accepted at this conference; the short sketch below shows how the acceptance rate in such a table is computed from the submission and acceptance counts.
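
For reference, the Python sketch below shows how the acceptance-rate figure in such a table is derived. The submission and acceptance counts used here are placeholders, not reported statistics for AITesting 2019.

    def acceptance_rate(submitted, accepted):
        """Acceptance rate as a percentage of submitted papers."""
        if submitted <= 0:
            raise ValueError("number of submitted papers must be positive")
        return 100.0 * accepted / submitted

    # Placeholder counts: 120 submissions, 30 acceptances -> 25.0%
    print(f"{acceptance_rate(120, 30):.1f}%")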



Conference Call for paper


Artificial Intelligence (AI) technologies are widely used in computer applications to perform tasks such as monitoring, forecasting, recommending, prediction, and statistical reporting. They are deployed in a variety of systems, including driverless vehicles, robot-controlled warehouses, financial forecasting applications, and security enforcement, and are increasingly integrated with cloud/fog/edge computing, big data analytics, robotics, the Internet of Things, mobile computing, smart cities, smart homes, intelligent healthcare, etc. However, the quality assurance of existing AI application development processes is still far from satisfactory, and the demand for demonstrable levels of confidence in such systems is growing.

Software testing is a fundamental, effective, and recognized quality assurance method that has proven cost-effective in ensuring the reliability of many complex software systems. However, the adaptation of software testing to the peculiarities of AI applications remains largely unexplored and requires extensive research. On the other hand, the availability of AI technologies provides an exciting opportunity to improve existing software testing processes, and recent years have shown that machine learning, data mining, knowledge representation, constraint optimization, planning, scheduling, multi-agent systems, etc. have real potential to positively impact software testing.

Recent years have seen a rapid growth of interest in testing AI applications as well as in applying AI techniques to software testing. It is, therefore, timely to provide an international forum for researchers and practitioners to exchange novel research results, to articulate the problems and challenges from practice, to deepen our understanding of the subject area with new theories, methodologies, techniques, and process models, and to improve practice with new tools and resources. This is the aim of the IEEE Conference on AI Testing.

Topics of Interest

The conference invites papers presenting original research on AI testing, reports of best practices in industry, and accounts of the challenges in practice and research. Topics of interest include (but are not limited to) the following:

Testing AI applications
  • Methodologies for testing, verification and validation of AI applications
  • Process models for testing AI applications and quality assurance activities and procedures
  • Quality models of AI applications and quality attributes of AI applications, such as correctness, reliability, safety, security, accuracy, precision, comprehensibility, explainability, etc.
  • Whole lifecycle of AI applications, including analysis, design, development, deployment, operation and evolution

Techniques for testing AI applications
  • Test case design, test data generation, test prioritization, test reduction, etc.
  • Metrics and measurements of the adequacy of testing AI applications
  • Test oracles for checking the correctness of AI applications on test cases
  • Tools and environments for automated and semi-automated testing of AI applications, covering various testing activities and the management of testing resources

Specific concerns of software testing with various specific types of AI technologies
  • Statistical machine learning and data mining
  • Symbolic machine learning, decision trees and random forests, reinforcement learning
  • Evolutionary methods and genetic algorithms
  • Multi-agent systems
  • Heuristic search algorithms
  • Cognitive computing
  • Knowledge management, expert systems
  • Automatic reasoning and theorem proving
  • Constraint programming and constraint optimization
  • Metaprogramming, higher-order functions, higher-order logic
  • Programming by example, program synthesis, etc.

Specific concerns of software testing for various types of AI applications
  • Computer vision and object recognition in image, audio and video
  • Personalized recommendation systems and business intelligence
  • Driverless vehicles and autonomous robotics
  • Intelligent diagnostic systems
  • Decision-making support systems
  • Prediction and forecast systems
  • Smart cities, smart homes, healthcare, and medicine, etc.
  • Natural language processing and intelligent human-machine interaction, etc.

Applications of AI techniques to software testing
  • Machine learning applications to software testing, such as test case generation, test effectiveness prediction and optimization, test adequacy improvement, test cost reduction, etc.
  • Constraint programming for test case generation and test suite reduction
  • Constraint scheduling and optimization for test case prioritization and test execution scheduling
  • Multi-agent systems for testing and test services
  • Crowdsourcing and swarm intelligence in software testing
  • Genetic algorithms, search-based techniques and heuristics for the optimization of testing
  • Knowledge-based and expert systems for software testing

Data quality checking for AI applications
  • Testing and quality assurance for unstructured training data
  • Automatic validation tools for unstructured training data and big data
  • Large-scale unstructured data quality certification

Paper Submission

We primarily invite submissions of research papers that describe original and significant work, but we also welcome papers that report on case studies and empirical research. Papers must not have been accepted for publication elsewhere or be under submission to another conference or journal. Each paper will be reviewed by at least three members of the Program Committee using a single-blind reviewing procedure. All papers must be submitted electronically in PDF format through the EasyChair conference system. Each paper is limited to 8 pages, including figures and references, in the IEEE Computer Society Proceedings format (two columns, single-spaced, 10pt font). At least one author of each accepted paper must register for the conference and confirm that she/he will present the paper in person.

Submission Deadline


AITesting 2019: IEEE International Conference On Artificial Intelligence Testing submission deadline is 2018-12-07.

Note: It is recommended to submit your paper on or before the submission deadline. Conferences generally do not accept research papers submitted after the deadline has passed. In rare cases, a conference may extend its deadline; any extension is usually announced on the official conference webpage.


Notification Date


Notification date of AITesting 2019: IEEE International Conference On Artificial Intelligence Testing is 2019-02-15.

Note: This is the date on which the conference announces the acceptance or rejection of submitted papers. If your research paper is accepted, the conference will ask you to submit the camera-ready version by a given due date, which is generally posted on the official conference webpage or communicated to you via email.


Conference Date


AITesting 2019: IEEE International Conference On Artificial Intelligence Testing will start on 2019-04-04.

Note: This is the date on which the conference starts.


Conference Location


AITesting 2019: IEEE International Conference On Artificial Intelligence Testing will be held in San Francisco, California, USA. This is where the conference takes place and where accepted research papers are presented.