Adversarial ML · AI Cybersecurity

Explain FGSM (Fast Gradient Sign Method) and how it bypasses ML classifiers.

FGSM (Goodfellow et al., 2014) generates an adversarial example by adding a small perturbation to the input in the direction of the sign of the loss gradient. Math: x_adv = x + ε · sign(∇_x J(θ, x, y)), where ε controls the perturbation magnitude. Result: a tiny, often imperceptible modification to the input causes the classifier to misclassify it with high confidence. Real-world impact: a malware classifier marks malware as benign after small byte-level changes; an image classifier confidently mislabels a stop sign as a speed-limit sign. Defences: adversarial training (training on perturbed examples), randomised smoothing, and certified defences.
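A minimal PyTorch sketch of the attack, for illustration only: `model` is assumed to be any differentiable classifier, `x`/`y` a batch of inputs and labels, and the ε value is arbitrary. The commented lines show how the same function would slot into an adversarial-training loop.

import torch
import torch.nn.functional as F

def fgsm_attack(model, x, y, epsilon=0.03):
    """Return x_adv = x + epsilon * sign(grad_x J(theta, x, y))."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)      # J(theta, x, y)
    loss.backward()                          # populates x.grad
    x_adv = x + epsilon * x.grad.sign()      # single-step perturbation
    return x_adv.clamp(0, 1).detach()        # keep pixels in valid range

# Adversarial training (sketch): mix clean and perturbed batches each step.
# for x, y in loader:
#     x_adv = fgsm_attack(model, x, y, epsilon=0.03)
#     loss = F.cross_entropy(model(x), y) + F.cross_entropy(model(x_adv), y)
#     optimizer.zero_grad(); loss.backward(); optimizer.step()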
Want the full explanation? This is the atomic answer suitable for quick interview prep. For the structured deep-dive — including code samples, strong-answer vs weak-answer notes, common follow-up questions, and how this fits the larger AI cybersecurity topic — see the full Q&A on Networkers Home:

→ AI Cybersecurity Interview Hub — Full Q&A with deep context

How Networkers Home prepares students for this kind of question

This question reflects real interview rounds at Bangalore's top product companies, BFSI firms, and GCC cybersecurity teams. Networkers Home's flagship courses include mock-interview sessions that drill exactly these question patterns, with feedback from interviewers who have hired for the role.

→ View the complete AI cybersecurity interview prep hub
→ View the related Networkers Home course
→ Book a free career consultation

Related Adversarial ML questions


Q. How would you red-team a fraud detection ML model?

Methodology: (1) Reconnaissance — identify model type, training data, features; (2) Black-box probing — submit transactions across normal/borderline/extreme ranges, observe decisions; (3) Membership inference — determine…
Read full answer →
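To make the black-box probing step in the teaser above concrete, here is a minimal, hypothetical sketch: `score_transaction` is a stand-in for the target model's real scoring endpoint, and the feature ranges are illustrative, not taken from any real system.

def score_transaction(amount, hour):
    # Placeholder for the fraud model under test: flags large or odd-hour
    # transactions. Replace with calls to the real scoring endpoint.
    return "fraud" if amount > 50_000 or hour < 5 else "ok"

# Sweep one feature across normal/borderline/extreme values and record the
# decisions to map where the model's decision boundary sits.
probes = [(amt, hr) for amt in (100, 5_000, 49_000, 51_000, 500_000)
                    for hr in (3, 14)]
for amt, hr in probes:
    print(f"amount={amt:>7}, hour={hr:>2} -> {score_transaction(amt, hr)}")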