Sometime in the last two weeks, Google has quietly changed the terms of service for its Colab users, adding a stipulation that Colab services may no longer be used to train deepfakes.
The first web-archived version from the Internet Archive that features the deepfake ban was captured last Tuesday, May 24th. The last captured version of the Colab FAQ that does not mention the ban was from May 14th.
Of the two popular deepfake-creation distributions, DeepFaceLab (DFL) and FaceSwap (both of which are forks of the controversial and anonymous code posted to Reddit in 2017), only the more notorious DFL appears to have been directly targeted by the ban. According to deepfake developer ‘chervonij’ at the DFL Discord, running the software in Google Colab now produces a warning:
‘You may be executing code that is disallowed, and this may restrict your ability to use Colab in the future. Please note the prohibited actions specified in our FAQ.’
However, interestingly, the user is currently still permitted to continue executing the code.
According to a user in the Discord for rival distribution FaceSwap, that project’s code apparently does not yet trigger the warning. This suggests that Colab has specifically targeted the code for DeepFaceLab (which also serves as the feeding architecture for the real-time deepfake streaming implementation DeepFaceLive), by far the most dominant deepfakes method.
FaceSwap co-lead developer Matt Tora commented*:
‘I find it very unlikely that Google are doing this for any particular ethical reasons, more that Colab’s raison d’être is for students/data scientists/researchers to be able to run computationally expensive GPU code in an easy and accessible manner, free of charge. However, I suspect that a not insignificant amount of users are exploiting this resource to create deepfake models, at scale, which is both computationally expensive and takes a not insignificant amount of training time to produce results.
‘You could say that Colab leans more to the educational, research side of AI. Executing scripts that require little user input, nor understanding, tends to go counter to this. At Faceswap we try to focus on educating the user in AI and the mechanisms involved, whilst lowering the barrier to entry. We very much encourage ethical use of the software and feel that making these kinds of tools available to a wider audience helps educate people in terms of what is achievable in today’s world, rather than keeping it hidden away for a select few.
‘Unfortunately we cannot control how our tools are ultimately used, nor where they are run. It saddens me that an avenue has been closed for people to experiment with our code, however, in terms of protecting this particular resource to ensure its availability to the actual target audience, I find it understandable.’
Since deepfake training is a VRAM-hungry pursuit, and since the advent of the GPU famine, many deepfakers in recent years have eschewed home training in favor of remote training in Colab, where it’s possible, depending on chance and tier, to train a deepfake model on powerful cards such as the Tesla T4 (16GB VRAM, currently around $2k USD), the V100 (32GB VRAM, around $4k USD), and the mighty A100 (80GB VRAM, MSRP of $32,097.00), among others.
The ban on Colab training seems likely to reduce the pool of deepfakers able to train higher-resolution models, where larger input and output images allow greater facial detail to be extracted and reproduced.
Some of the most committed deepfake hobbyists and enthusiasts, according to Discord and forum posts, have invested heavily in local hardware over the last couple of years, in spite of the high prices of GPUs.
However, given the high costs involved, sub-communities have emerged to deal with the challenges of training deepfakes on Colab, with random GPU allocation the most common complaint since Colab restricted free users’ access to higher-end GPUs.
* In private messages on Discord
First published 28th May 2022. Revised 7:28 AM EST, correction of quote typo.