Show simple item record

dc.creator: Demertzis K., Iliadis L., Kikiras P.
dc.date.accessioned: 2023-01-31T07:53:19Z
dc.date.available: 2023-01-31T07:53:19Z
dc.date.issued: 2021
dc.identifier: 10.1007/978-3-030-79157-5_18
dc.identifier.isbn: 9783030791568
dc.identifier.issn: 18684238
dc.identifier.uri: http://hdl.handle.net/11615/73199
dc.description.abstract: Every learning algorithm has a specific bias. This may be due to the choice of its hyperparameters, to the characteristics of its classification methodology, or even to the way the considered information is represented. As a result, machine learning models are vulnerable to specialized attacks. Moreover, training datasets are not always an accurate image of the real world; their selection process, and the assumption that they share the same distribution as all unknown cases, introduce another level of bias. Global and Local Interpretability (GLI) is a very important process for determining the right architectures to counter Adversarial Attacks (ADA). It contributes towards a holistic view of the intelligent model, through which we can determine the most important features, understand how decisions are made, and see the interactions between the involved features. This research paper introduces an innovative hybrid Lipschitz - Shapley approach for explainable defence against adversarial attacks. The introduced methodology employs the Lipschitz constant and determines its evolution during the training process of the intelligent model. The use of Shapley values offers clear explanations for the specific decisions made by the model. © 2021, IFIP International Federation for Information Processing.
dc.language.iso: en
dc.source: IFIP Advances in Information and Communication Technology
dc.source.uri: https://www.scopus.com/inward/record.uri?eid=2-s2.0-85112656049&doi=10.1007%2f978-3-030-79157-5_18&partnerID=40&md5=d6d5f35fcfa4798393a134300bdaedfb
dc.subject: Biomedical engineering
dc.subject: Classification (of information)
dc.subject: Energy efficiency
dc.subject: Machine learning
dc.subject: Network security
dc.subject: Classification methodologies
dc.subject: Important features
dc.subject: Intelligent modeling
dc.subject: Interpretability
dc.subject: Lipschitz constant
dc.subject: Machine learning models
dc.subject: Training data sets
dc.subject: Training process
dc.subject: Learning algorithms
dc.subject: Springer Science and Business Media Deutschland GmbH
dc.title: A Lipschitz - Shapley Explainable Defense Methodology Against Adversarial Attacks
dc.type: conferenceItem
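
The abstract describes two ingredients: tracking the Lipschitz constant of the model as it trains, and explaining individual decisions with Shapley values. The following Python sketch illustrates both ideas under stated assumptions (a toy logistic-regression model, a finite-difference Lipschitz estimate, and a mean-baseline value function for Shapley attributions); it is not the authors' implementation, and all data and parameters are illustrative.

```python
# Minimal sketch of the two ideas in the abstract (assumptions, not the paper's code):
# (1) an empirical Lipschitz estimate monitored over training epochs, and
# (2) exact Shapley-value attributions for a single prediction (feasible for few features).
import itertools
import math
import numpy as np

rng = np.random.default_rng(0)

# Toy data: 4 features, binary labels (purely illustrative).
n, d = 200, 4
X = rng.normal(size=(n, d))
y = (X[:, 0] - 0.5 * X[:, 1] + 0.1 * rng.normal(size=n) > 0).astype(float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def predict(w, b, X):
    return sigmoid(X @ w + b)

def empirical_lipschitz(w, b, X, eps=1e-2, trials=50):
    """Crude estimate of the local Lipschitz constant: the largest ratio of
    output change to input perturbation over random probes around the data."""
    ratios = []
    for _ in range(trials):
        x = X[rng.integers(len(X))]
        delta = rng.normal(size=x.shape)
        delta *= eps / np.linalg.norm(delta)
        num = abs(predict(w, b, (x + delta)[None, :]) - predict(w, b, x[None, :]))[0]
        ratios.append(num / eps)
    return max(ratios)

# Logistic-regression training loop with a per-epoch Lipschitz estimate.
# A sharply growing estimate signals a model whose output can be swung by
# small (adversarial) input perturbations -- the "Lipschitz" half of the idea.
w, b, lr = np.zeros(d), 0.0, 0.5
for epoch in range(20):
    p = predict(w, b, X)
    w -= lr * (X.T @ (p - y) / n)
    b -= lr * np.mean(p - y)
    print(f"epoch {epoch:2d}  empirical Lipschitz ~ {empirical_lipschitz(w, b, X):.3f}")

# Exact Shapley values for one prediction, using the common approximation
# that "absent" features are replaced by their background (mean) value.
background = X.mean(axis=0)

def value(subset, x):
    """Model output when only the features in `subset` are taken from x."""
    z = background.copy()
    z[list(subset)] = x[list(subset)]
    return predict(w, b, z[None, :])[0]

def shapley_values(x):
    phi = np.zeros(d)
    for i in range(d):
        others = [j for j in range(d) if j != i]
        for k in range(d):
            for S in itertools.combinations(others, k):
                weight = math.factorial(k) * math.factorial(d - k - 1) / math.factorial(d)
                phi[i] += weight * (value(S + (i,), x) - value(S, x))
    return phi

print("Shapley attributions for sample 0:", np.round(shapley_values(X[0]), 3))
```

The brute-force Shapley computation enumerates all feature coalitions, so it only scales to a handful of features; for realistic models one would use a sampling-based approximation such as the SHAP family of estimators.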


Files in this item

Files  Size  Format  View

There are no files associated with this item.

This item appears in the following collection(s)
