File:Demonstration of inpainting and outpainting using Stable Diffusion (step 2 of 4).png
Original file (2,048 × 3,584 pixels, file size: 4.21 MB, MIME type: image/png)
Summary
Description: Demonstration of inpainting and outpainting using Stable Diffusion (step 2 of 4).png
Demonstration of the usage of inpainting and outpainting techniques on algorithmically generated artworks created using the Stable Diffusion V1-4 AI diffusion model. Not only is Stable Diffusion capable of generating new images from scratch via a text prompt, it is also capable of guided image synthesis for enhancing existing images through its diffusion-denoising mechanism. This image illustrates the process by which Stable Diffusion can be used to perform both inpainting and outpainting, as one of four images showing each step of the procedure.
All artworks were created using a single NVIDIA RTX 3090. The front-end used for the entire generation process is the Stable Diffusion web UI created by AUTOMATIC1111.
An initial 512x768 image was algorithmically generated with Stable Diffusion via txt2img using the following prompts:
Then, two passes of the SD upscale script using "Real-ESRGAN 4x plus anime 6B" were run within img2img. The first pass used a tile overlap of 64, denoising strength of 0.3, 50 sampling steps with Euler a, and a CFG scale of 7. The second pass used a tile overlap of 128, denoising strength of 0.1, 10 sampling steps with Euler a, and a CFG scale of 7. This creates our initial 2048x3072 image to begin working with. Unfortunately for her (and fortunately for the purpose of this demonstration), it appears that the AI neglected to give this woman one of her arms.
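For readers wanting to reproduce a comparable workflow outside of the web UI, below is a minimal sketch using the Hugging Face diffusers library. The prompt placeholders, file names and model IDs are assumptions, and the single img2img refinement pass only approximates the web UI's SD upscale script, which upscales with Real-ESRGAN and then refines the image tile by tile.

```python
import torch
from diffusers import (
    StableDiffusionPipeline,
    StableDiffusionImg2ImgPipeline,
    EulerAncestralDiscreteScheduler,
)

# txt2img: generate the initial 512x768 image (50 steps, CFG scale 7).
# EulerAncestralDiscreteScheduler roughly corresponds to "Euler a" in the web UI.
txt2img = StableDiffusionPipeline.from_pretrained(
    "CompVis/stable-diffusion-v1-4", torch_dtype=torch.float16
).to("cuda")
txt2img.scheduler = EulerAncestralDiscreteScheduler.from_config(txt2img.scheduler.config)

base = txt2img(
    prompt="<prompts used in step 1>",   # placeholder; the original prompts are not reproduced here
    width=512,
    height=768,
    num_inference_steps=50,
    guidance_scale=7.0,
).images[0]
base.save("step1_base_512x768.png")      # hypothetical file name

# img2img refinement at denoising strength 0.3, analogous to the first SD upscale
# pass (the actual script upscales with Real-ESRGAN, then refines tile by tile).
img2img = StableDiffusionImg2ImgPipeline.from_pretrained(
    "CompVis/stable-diffusion-v1-4", torch_dtype=torch.float16
).to("cuda")
img2img.scheduler = EulerAncestralDiscreteScheduler.from_config(img2img.scheduler.config)

refined = img2img(
    prompt="<prompts used in step 1>",
    image=base,
    strength=0.3,                        # denoising strength
    num_inference_steps=50,
    guidance_scale=7.0,
).images[0]
refined.save("step1_refined.png")        # hypothetical file name
```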
Using the "Outpainting mk2" script within img2img, the bottom of the image was extended by 512 pixels (via two passes, each pass extending 256 pixels), using 100 sampling steps with Euler a, denoising strength of 0.8, CFG scale of 7.5, mask blur of 4, fall-off exponent value of 1.8, colour variation set to 0.03. The prompts used were identical to those utilised during the first step. This subsequently increases the image's dimensions to 2048x3584, while also revealing the woman's midriff, belly button and skirt, which were previously absent from the original AI-generated image.
In GIMP, I drew a very shoddy attempt at a human arm using the standard paintbrush. This will provide a guide for the AI model to generate a new arm.
Using the inpaint feature for img2img, I drew a mask over the arm drawn in the previous step, along with a portion of the shoulder. The following settings were used for all passes:
An initial pass was run using the following prompts:
This created the arm; a subsequent pass was then run to fine-tune deformations and blemishes around the newly generated arm and along the sleeve. A new mask was drawn over the shoulder, and the following prompt was used:
This pass produced the final image.
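As a rough illustration of the two masked inpainting passes described above, the sketch below again relies on the diffusers inpainting pipeline; the file names, masks and prompt placeholders are assumptions, and the web UI's per-pass settings (such as inpainting at full resolution) are not reproduced.

```python
import torch
from PIL import Image
from diffusers import StableDiffusionInpaintPipeline

# Hypothetical inputs: the image with the crudely painted arm (from GIMP) and a
# hand-drawn mask covering that arm plus part of the shoulder (white = regenerate).
init = Image.open("step3_painted_arm.png").convert("RGB").resize((512, 896))
mask = Image.open("arm_and_shoulder_mask.png").convert("L").resize((512, 896))

pipe = StableDiffusionInpaintPipeline.from_pretrained(
    "runwayml/stable-diffusion-inpainting", torch_dtype=torch.float16
).to("cuda")

# First pass: regenerate the masked region, guided by an arm-specific prompt.
arm_pass = pipe(
    prompt="<arm-specific prompt>",      # placeholder; the original prompt is not reproduced
    image=init,
    mask_image=mask,
    height=896,
    width=512,
    num_inference_steps=50,
    guidance_scale=7.5,
).images[0]

# Second pass: a new, smaller mask over the shoulder/sleeve to clean up blemishes.
shoulder_mask = Image.open("shoulder_mask.png").convert("L").resize((512, 896))
final = pipe(
    prompt="<shoulder and sleeve prompt>",
    image=arm_pass,
    mask_image=shoulder_mask,
    height=896,
    width=512,
    num_inference_steps=50,
    guidance_scale=7.5,
).images[0]
final.save("step4_final.png")            # hypothetical file name
```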
Date | |
Source | Own work |
Author | Benlisquare |
Permission (Reusing this file) |
As the creator of the output images, I release this image under the licence displayed within the template below.
The Stable Diffusion AI model is released under the CreativeML OpenRAIL-M License, which "does not impose any restrictions on reuse, distribution, commercialization, adaptation" as long as the model is not intentionally used to cause harm to individuals, for instance to deliberately mislead or deceive. As stipulated by the licence, the authors of the AI model claim no rights over any image outputs generated.
Licensing
- You are free to:
  - Share – copy, distribute and transmit the work
  - Remix – adapt the work
- Under the following conditions:
  - Attribution – You must give appropriate credit, provide a link to the licence, and indicate if changes were made. You may do so in any reasonable manner, but not in any way that suggests the licensor endorses you or your use.
  - ShareAlike – If you remix, transform, or build upon the material, you must distribute your contributions under the same or a compatible licence as the original.
Permission is granted to copy, distribute and/or modify this document under the terms of the GNU Free Documentation License, Version 1.2 or any later version published by the Free Software Foundation; with no Invariant Sections, no Front-Cover Texts, and no Back-Cover Texts. A copy of the licence is included in the section entitled "GNU Free Documentation License". http://www.gnu.org/copyleft/fdl.html
Items portrayed in this file
depicts
some value (no Wikidata item)
27 September 2022
image/png
File history
Click on a date/time to view the file as it appeared at that time.
Date/Time | Thumbnail | Dimensions | User | Comment
---|---|---|---|---
current | 14:22, 27 September 2022 | 2,048 × 3,584 (4.21 MB) | Benlisquare | {{Information |Description=Demonstration of the usage of inpainting and outpainting techniques on algorithmically-generated artworks created using the [https://github.com/CompVis/stable-diffusion Stable Diffusion V1-4] AI diffusion model. Not only is Stable Diffusion capable of generating new images from scratch via text prompt, it is also capable of providing guided image synthesis for enhancing existing images, through the use of the model's diffusion-denoising mechanism. This image aims t...
File usage
The following pages use this file:
Global file usage
The following other wikis use this file:
- Usage on en.wikipedia.org