LTX-Video: Initial commit

This commit is contained in:
Yoav HaCohen
2024-11-21 16:55:32 +02:00
commit 5a592aa5b1
47 changed files with 7441 additions and 0 deletions

4
.gitattributes vendored Normal file
View File

@@ -0,0 +1,4 @@
*.jpg filter=lfs diff=lfs merge=lfs -text
*.jpeg filter=lfs diff=lfs merge=lfs -text
*.png filter=lfs diff=lfs merge=lfs -text
*.gif filter=lfs diff=lfs merge=lfs -text

27
.github/workflows/pylint.yml vendored Normal file
View File

@@ -0,0 +1,27 @@
name: Ruff
on: [push]
jobs:
  build:
    runs-on: ubuntu-latest
    strategy:
      matrix:
        python-version: ["3.10"]
    steps:
      - name: Checkout repository and submodules
        uses: actions/checkout@v3
      - name: Set up Python ${{ matrix.python-version }}
        uses: actions/setup-python@v3
        with:
          python-version: ${{ matrix.python-version }}
      - name: Install dependencies
        run: |
          python -m pip install --upgrade pip
          pip install ruff==0.2.2 black==24.2.0
      - name: Analyzing the code with ruff
        run: |
          ruff $(git ls-files '*.py')
      - name: Verify that no Black changes are required
        run: |
          black --check $(git ls-files '*.py')

165
.gitignore vendored Normal file
View File

@@ -0,0 +1,165 @@
# Byte-compiled / optimized / DLL files
__pycache__/
*.py[cod]
*$py.class
# C extensions
*.so
# Distribution / packaging
.Python
build/
develop-eggs/
dist/
downloads/
eggs/
.eggs/
lib/
lib64/
parts/
sdist/
var/
wheels/
share/python-wheels/
*.egg-info/
.installed.cfg
*.egg
MANIFEST
# PyInstaller
# Usually these files are written by a python script from a template
# before PyInstaller builds the exe, so as to inject date/other infos into it.
*.manifest
*.spec
# Installer logs
pip-log.txt
pip-delete-this-directory.txt
# Unit test / coverage reports
htmlcov/
.tox/
.nox/
.coverage
.coverage.*
.cache
nosetests.xml
coverage.xml
*.cover
*.py,cover
.hypothesis/
.pytest_cache/
cover/
# Translations
*.mo
*.pot
# Django stuff:
*.log
local_settings.py
db.sqlite3
db.sqlite3-journal
# Flask stuff:
instance/
.webassets-cache
# Scrapy stuff:
.scrapy
# Sphinx documentation
docs/_build/
# PyBuilder
.pybuilder/
target/
# Jupyter Notebook
.ipynb_checkpoints
# IPython
profile_default/
ipython_config.py
# pyenv
# For a library or package, you might want to ignore these files since the code is
# intended to run in multiple environments; otherwise, check them in:
# .python-version
# pipenv
# According to pypa/pipenv#598, it is recommended to include Pipfile.lock in version control.
# However, in case of collaboration, if having platform-specific dependencies or dependencies
# having no cross-platform support, pipenv may install dependencies that don't work, or not
# install all needed dependencies.
#Pipfile.lock
# poetry
# Similar to Pipfile.lock, it is generally recommended to include poetry.lock in version control.
# This is especially recommended for binary packages to ensure reproducibility, and is more
# commonly ignored for libraries.
# https://python-poetry.org/docs/basic-usage/#commit-your-poetrylock-file-to-version-control
#poetry.lock
# pdm
# Similar to Pipfile.lock, it is generally recommended to include pdm.lock in version control.
#pdm.lock
# pdm stores project-wide configurations in .pdm.toml, but it is recommended to not include it
# in version control.
# https://pdm.fming.dev/latest/usage/project/#working-with-version-control
.pdm.toml
.pdm-python
.pdm-build/
# PEP 582; used by e.g. github.com/David-OConnor/pyflow and github.com/pdm-project/pdm
__pypackages__/
# Celery stuff
celerybeat-schedule
celerybeat.pid
# SageMath parsed files
*.sage.py
# Environments
.env
.venv
env/
venv/
ENV/
env.bak/
venv.bak/
# Spyder project settings
.spyderproject
.spyproject
# Rope project settings
.ropeproject
# mkdocs documentation
/site
# mypy
.mypy_cache/
.dmypy.json
dmypy.json
# Pyre type checker
.pyre/
# pytype static type analyzer
.pytype/
# Cython debug symbols
cython_debug/
# PyCharm
# JetBrains specific template is maintained in a separate JetBrains.gitignore that can
# be found at https://github.com/github/gitignore/blob/main/Global/JetBrains.gitignore
# and can be added to the global gitignore or merged into this file. For a more nuclear
# option (not recommended) you can uncomment the following to ignore the entire idea folder.
.idea/
# From inference.py
video_output_*.mp4

16
.pre-commit-config.yaml Normal file
View File

@@ -0,0 +1,16 @@
repos:
  - repo: https://github.com/astral-sh/ruff-pre-commit
    # Ruff version.
    rev: v0.2.2
    hooks:
      # Run the linter.
      - id: ruff
        args: [--fix] # Automatically fix issues if possible.
        types: [python] # Ensure it only runs on .py files.
  - repo: https://github.com/psf/black
    rev: 24.2.0 # Specify the version of Black you want
    hooks:
      - id: black
        name: Black code formatter
        language_version: python3 # Use the Python version you're targeting (e.g., 3.10)

68
LICENSE Normal file
View File

@@ -0,0 +1,68 @@
LTX Video 0.9 (“LTXV”)
By Lightricks Ltd. (“Lightricks”)
RAIL-M License
dated November 22, 2024
Section I: PREAMBLE
Multimodal generative models are being widely adopted and used, and have the potential to transform the way artists, among other individuals, conceive and benefit from artificial intelligence (“AI”) or ML technologies as a tool for content creation. Notwithstanding the current and potential benefits that these artifacts can bring to society at large, there are also concerns about potential misuses of them, either due to their technical limitations or ethical considerations.
The development and use of AI does not come without concerns. The world has witnessed how AI techniques may, in some instances, become risky for the public in general. These risks come in many forms, from racial discrimination to the misuse of sensitive information.
This RAIL-M License is generally applicable to any machine-learning Model (as defined below). The “RAIL” nomenclature indicates that there are use restrictions prohibiting the use of the Model. These restrictions are intended to avoid potential misuse by not permitting the use of the Model in very specific scenarios, in order for the licensor to be able to enforce the license in case a potential misuse of the Model may occur. Even though derivative versions of the Model could be released under different licensing terms, the License (as defined below) specifies that the use restrictions in the original License must apply to such derivative versions. This License governs the use of the Model (and its derivatives) and is informed by the model card associated with the Model.
NOW THEREFORE, You and Licensor agree as follows:
1. Definitions
1. “Complementary Material” means the applicable source code and scripts used to define, run, load, benchmark or evaluate the Model, and used to prepare data for training or evaluation, if any. This includes any accompanying documentation, tutorials, examples, etc, if any. Complementary Material is not licensed under this License.
2. “Contribution” means any work of authorship, including the original version of the Model and any modifications or additions to that Model or Derivatives of the Model thereof, that is intentionally submitted to Licensor for inclusion in the Model by the rights owner or by an individual or legal entity authorized to submit on behalf of the rights owner. For the purposes of this definition, “submitted” means any form of electronic, verbal, or written communication sent to the Licensor or its representatives, including but not limited to communication on electronic mailing lists, source code control systems, and issue tracking systems that are managed by, or on behalf of, the Licensor for the purpose of discussing and improving the Model, but excluding communication that is conspicuously marked or otherwise designated in writing by the rights owner as “Not a Contribution.”
3. “Contributor” means Licensor and any individual or legal entity on behalf of whom a Contribution has been received by Licensor and subsequently incorporated within the Model.
4. “Data” means a collection of information and/or content extracted from the dataset used with the Model, including to train, pretrain, or otherwise evaluate the Model. The Data is not licensed under this License.
5. “Derivatives of the Model” means all modifications to the Model, works based on the Model, or any other model which is created or initialized by transfer of patterns of the weights, parameters, activations or output of the Model, to the other model, in order to cause the other model to perform similarly to the Model, including but not limited to distillation methods entailing the use of intermediate data representations or methods based on the generation of synthetic data by the Model for training the other model.
6. “Distribution” means any transmission, reproduction, publication or other sharing of the Model or Derivatives of the Model to a third party, including providing the Model as a hosted service made available by electronic or other remote means e.g. API-based or web access.
7. “Harm” includes but is not limited to physical, mental, psychological, financial and reputational damage, pain, or loss.
8. “License” means the terms and conditions for use, reproduction, and Distribution as defined in this document.
9. “Licensor” means the rights owner or entity authorized by the rights owner that is granting the License, including the persons or entities that may have rights in the Model and/or distributing the Model. For the purposes of this License, the Licensor is Lightricks Ltd.
10. “Model” means any accompanying machine-learning based assemblies (including checkpoints), consisting of learnt weights, parameters (including optimizer states), corresponding to the Lightricks Model “LTX Video 0.9” model architecture as embodied in the Complementary Material, that have been trained or tuned, in whole or in part on the Data, using the Complementary Material.
11. ”Output” means the results of operating a Model as embodied in informational content resulting therefrom.
12. “Permitted Purpose” means for academic or research purposes only, and explicitly excludes commercialization such as downstream selling of the Model or Derivatives of the Model.
13. “Third Parties” means individuals or legal entities that are not under common control with Licensor or You.
14. “You” (or “Your”) means an individual or legal entity exercising permissions granted by this License and/or making use of the Model for whichever purpose and in any field of use, including usage of the Model in an end-use application e.g. chatbot, translator, image generator.
Section II: INTELLECTUAL PROPERTY RIGHTS
Both copyright and patent grants apply to the Model and Derivatives of the Model. The Model and Derivatives of the Model are subject to additional terms as described in Section III, which shall govern the use of the Model and Derivatives of the Model even in the event Section II is held unenforceable.
2. Grant of Copyright License. Subject to the terms and conditions of this License, each Contributor hereby grants to You a perpetual, worldwide, non-exclusive, no-charge, royalty-free, irrevocable copyright license to reproduce, prepare, publicly display, publicly perform, sublicense, and distribute the Model and Derivatives of the Model, only for the Permitted Purpose.
3. Grant of Patent License. Subject to the terms and conditions of this License and where and as applicable, each Contributor hereby grants to You a perpetual, worldwide, non-exclusive, no-charge, royalty-free, irrevocable (except as stated in this paragraph) patent license to make, have made, use, offer to sell, sell, import, and otherwise transfer the Model and/or Derivatives of the Model, but only for the Permitted Purpose. Such license applies only to those patent claims licensable by such Contributor that are necessarily infringed by their Contribution(s) alone or by combination of their Contribution(s) with the Model or Derivatives of the Model to which such Contribution(s) was submitted. If You institute patent litigation against any entity (including a cross-claim or counterclaim in a lawsuit) alleging that the Model or Derivative of the Model and/or a Contribution incorporated within the Model or Derivative of the Model constitutes direct or contributory patent infringement, then any patent licenses granted to You under this License for the Model and/or Derivative of the Model shall terminate as of the date such litigation is asserted or filed.
Section III: CONDITIONS OF USAGE, DISTRIBUTION AND REDISTRIBUTION
4. Distribution and Redistribution. You may host for Third Party remote access purposes (e.g. software-as-a-service), reproduce and distribute copies of the Model or Derivatives of the Model thereof in any medium, with or without modifications, provided that You meet the following conditions:
1. Use-based restrictions as referenced in paragraph 5 MUST be included as an enforceable provision by You in any type of legal agreement (e.g. a license) governing the use and/or distribution of the Model or Derivatives of the Model, and You shall give notice to subsequent users You Distribute to, that the Model or Derivatives of the Model are subject to paragraph 5.
2. You must give any Third Party recipients of the Model or Derivatives of the Model a copy of this License;
3. You must cause any modified files to carry prominent notices stating that You changed the files;
4. You must retain all copyright, patent, trademark, and attribution notices excluding those notices that do not pertain to any part of the Model, Derivatives of the Model.
5. You and any Third Party recipients of the Model or Derivatives of the Model must adhere to the Permitted Purpose.
You may add Your own copyright statement to Your modifications and may provide additional or different license terms and conditions respecting paragraph 4.1. for use, reproduction, or Distribution of Your modifications, or for any such Derivatives of the Model as a whole, provided Your use, reproduction, and Distribution of the Model otherwise complies with the conditions stated in this License.
5. Use-based restrictions. The restrictions set forth in Attachment A are considered Use-based restrictions. Therefore You cannot use the Model or the Derivatives of the Model in violation of such restrictions. You may use the Model subject to this License, including only for lawful purposes and in accordance with the License. “Use” may include creating any content with, fine-tuning, updating, running, training, evaluating and/or re-parametrizing the Model. You shall require all of Your users who use the Model or a Derivative of the Model to comply with the terms of this paragraph 5.
6. The Output You Generate. Except as set forth herein, Licensor claims no rights in the Output You generate using the Model. You are accountable for the input you insert into the Model, the Output you generate and its subsequent uses. No use of the Output can contravene any provision as stated in the License.
Section IV: OTHER PROVISIONS
7. Updates and Runtime Restrictions. To the maximum extent permitted by law, Licensor reserves the right to restrict (remotely or otherwise) usage of the Model in violation of this License, update the Model through electronic means, or modify the Output of the Model based on updates. You shall undertake reasonable efforts to use the latest version of the Model. Any use of the non-current version of the Model is done solely at your risk.
8. Trademarks and related. Nothing in this License permits You to make use of Licensor's trademarks, trade names, logos or to otherwise suggest endorsement or misrepresent the relationship between the parties; and any rights not expressly granted herein are reserved by the Licensor.
9. Disclaimer of Warranty. Unless required by applicable law or agreed to in writing, Licensor provides the Model (and each Contributor provides its Contributions) on an “AS IS” BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied, including, without limitation, any warranties or conditions of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A PARTICULAR PURPOSE. You are solely responsible for determining the appropriateness of using or redistributing the Model and Derivatives of the Model, and assume any risks associated with Your exercise of permissions under this License.
10. Limitation of Liability. In no event and under no legal theory, whether in tort (including negligence), contract, or otherwise, unless required by applicable law (such as deliberate and grossly negligent acts) or agreed to in writing, shall any Contributor be liable to You for damages, including any direct, indirect, special, incidental, or consequential damages of any character arising as a result of this License or out of the use or inability to use the Model (including but not limited to damages for loss of goodwill, work stoppage, computer failure or malfunction, or any and all other commercial damages or losses), even if such Contributor has been advised of the possibility of such damages.
11. Accepting Warranty or Additional Liability. While redistributing the Model or Derivatives of the Model, You may choose to charge a fee in exchange for support, warranty, indemnity, or other liability obligations and/or rights consistent with this License. However, in accepting such obligations, You may act only on Your own behalf and on Your sole responsibility, not on behalf of any other Contributor, and only if You agree to indemnify, defend, and hold each Contributor harmless for any liability incurred by, or claims asserted against such Contributor, by reason of your accepting any such warranty or additional liability.
12. If any provision of this License is held to be invalid, illegal or unenforceable, the remaining provisions shall be unaffected thereby and remain valid as if such provision had not been set forth herein.
END OF TERMS AND CONDITIONS
________________
Attachment A
Use Restrictions
You agree not to use the Model or its Derivatives in any of the following ways:
1. Outside of the Permitted Purpose;
2. In any way that violates any applicable national, federal, state, local or international law or regulation.
3. For the purpose of exploiting, Harming or attempting to exploit or Harm minors in any way;
4. To generate or disseminate false information and/or content with the purpose of Harming others;
5. To generate or disseminate personal identifiable information that can be used to Harm an individual;
6. To generate or disseminate information and/or content (e.g. images, code, posts, articles), and place the information and/or content in any context (e.g. bot generating tweets) without expressly and intelligibly disclaiming that the information and/or content is machine generated;
7. To defame, disparage or otherwise harass others;
8. To impersonate or attempt to impersonate (e.g. deepfakes) others without their consent;
9. For fully automated decision making that adversely impacts an individual's legal rights or otherwise creates or modifies a binding, enforceable obligation;
10. For any use intended to or which has the effect of discriminating against or Harming individuals or groups based on online or offline social behavior or known or predicted personal or personality characteristics;
11. To exploit any of the vulnerabilities of a specific group of persons based on their age, social, physical or mental characteristics, in order to materially distort the behavior of a person belonging to that group in a manner that causes or is likely to cause that person or another person Harm;
12. For any use intended to or which has the effect of discriminating against individuals or groups based on legally protected characteristics or categories;
13. To generate or disseminate information for the purpose to be used for administration of justice, law enforcement, immigration or asylum processes, such as predicting an individual will commit fraud/crime commitment (e.g. by text profiling, drawing causal relationships between assertions made in documents, indiscriminate and arbitrarily-targeted use);
14. To generate and/or disseminate malware (including but not limited to ransomware) or any other content to be used for the purpose of harming electronic systems;
15. To engage in, promote, incite, or facilitate discrimination or other unlawful or harmful conduct in the provision of employment, employment benefits, credit, housing, or other essential goods and services;
16. To engage in, promote, incite, or facilitate the harassment, abuse, threatening, or bullying of individuals or groups of individuals.

97
README.md Normal file
View File

@@ -0,0 +1,97 @@
<div align="center">
# LTX-Video
This is the official repository for LTX-Video.
[Website](https://www.lightricks.com/ltxv) |
[Model](https://huggingface.co/Lightricks/LTX-Video) |
[Demo](https://fal.ai/models/fal-ai/ltx-video) |
[Paper (Soon)](https://github.com/Lightricks/LTX-Video)
</div>
## Table of Contents
* [Introduction](#introduction)
* [Quick start guide](#quick-start-guide)
* [Installation](#installation)
* [Inference](#inference)
* [ComfyUI Integration](#comfyui-integration)
* [Model User Guide](#model-user-guide)
* [Acknowledgement](#acknowledgement)
# Introduction
LTX-Video is the first DiT-based video generation model that can generate high-quality videos in *real-time*.
It can generate 24 FPS videos at 768x512 resolution, faster than it takes to watch them.
The model is trained on a large-scale dataset of diverse videos and can generate high-resolution videos
with realistic and diverse content.
| | | | |
|:---:|:---:|:---:|:---:|
| ![example1](./docs/_static/ltx-video_example_00001.gif)<br><details style="max-width: 300px; margin: auto;"><summary>A woman with long brown hair and light skin smiles at another woman...</summary>A woman with long brown hair and light skin smiles at another woman with long blonde hair. The woman with brown hair wears a black jacket and has a small, barely noticeable mole on her right cheek. The camera angle is a close-up, focused on the woman with brown hair's face. The lighting is warm and natural, likely from the setting sun, casting a soft glow on the scene. The scene appears to be real-life footage.</details> | ![example2](./docs/_static/ltx-video_example_00002.gif)<br><details style="max-width: 300px; margin: auto;"><summary>A woman walks away from a white Jeep parked on a city street at night...</summary>A woman walks away from a white Jeep parked on a city street at night, then ascends a staircase and knocks on a door. The woman, wearing a dark jacket and jeans, walks away from the Jeep parked on the left side of the street, her back to the camera; she walks at a steady pace, her arms swinging slightly by her sides; the street is dimly lit, with streetlights casting pools of light on the wet pavement; a man in a dark jacket and jeans walks past the Jeep in the opposite direction; the camera follows the woman from behind as she walks up a set of stairs towards a building with a green door; she reaches the top of the stairs and turns left, continuing to walk towards the building; she reaches the door and knocks on it with her right hand; the camera remains stationary, focused on the doorway; the scene is captured in real-life footage.</details> | ![example3](./docs/_static/ltx-video_example_00003.gif)<br><details style="max-width: 300px; margin: auto;"><summary>A woman with blonde hair styled up, wearing a black dress...</summary>A woman with blonde hair styled up, wearing a black dress with sequins and pearl earrings, looks down with a sad expression on her face. The camera remains stationary, focused on the woman's face. The lighting is dim, casting soft shadows on her face. The scene appears to be from a movie or TV show.</details> | ![example4](./docs/_static/ltx-video_example_00004.gif)<br><details style="max-width: 300px; margin: auto;"><summary>The camera pans over a snow-covered mountain range...</summary>The camera pans over a snow-covered mountain range, revealing a vast expanse of snow-capped peaks and valleys.The mountains are covered in a thick layer of snow, with some areas appearing almost white while others have a slightly darker, almost grayish hue. The peaks are jagged and irregular, with some rising sharply into the sky while others are more rounded. The valleys are deep and narrow, with steep slopes that are also covered in snow. The trees in the foreground are mostly bare, with only a few leaves remaining on their branches. The sky is overcast, with thick clouds obscuring the sun. The overall impression is one of peace and tranquility, with the snow-covered mountains standing as a testament to the power and beauty of nature.</details> |
| ![example5](./docs/_static/ltx-video_example_00005.gif)<br><details style="max-width: 300px; margin: auto;"><summary>A woman with light skin, wearing a blue jacket and a black hat...</summary>A woman with light skin, wearing a blue jacket and a black hat with a veil, looks down and to her right, then back up as she speaks; she has brown hair styled in an updo, light brown eyebrows, and is wearing a white collared shirt under her jacket; the camera remains stationary on her face as she speaks; the background is out of focus, but shows trees and people in period clothing; the scene is captured in real-life footage.</details> | ![example6](./docs/_static/ltx-video_example_00006.gif)<br><details style="max-width: 300px; margin: auto;"><summary>A man in a dimly lit room talks on a vintage telephone...</summary>A man in a dimly lit room talks on a vintage telephone, hangs up, and looks down with a sad expression. He holds the black rotary phone to his right ear with his right hand, his left hand holding a rocks glass with amber liquid. He wears a brown suit jacket over a white shirt, and a gold ring on his left ring finger. His short hair is neatly combed, and he has light skin with visible wrinkles around his eyes. The camera remains stationary, focused on his face and upper body. The room is dark, lit only by a warm light source off-screen to the left, casting shadows on the wall behind him. The scene appears to be from a movie.</details> | ![example7](./docs/_static/ltx-video_example_00007.gif)<br><details style="max-width: 300px; margin: auto;"><summary>A prison guard unlocks and opens a cell door...</summary>A prison guard unlocks and opens a cell door to reveal a young man sitting at a table with a woman. The guard, wearing a dark blue uniform with a badge on his left chest, unlocks the cell door with a key held in his right hand and pulls it open; he has short brown hair, light skin, and a neutral expression. The young man, wearing a black and white striped shirt, sits at a table covered with a white tablecloth, facing the woman; he has short brown hair, light skin, and a neutral expression. The woman, wearing a dark blue shirt, sits opposite the young man, her face turned towards him; she has short blonde hair and light skin. The camera remains stationary, capturing the scene from a medium distance, positioned slightly to the right of the guard. The room is dimly lit, with a single light fixture illuminating the table and the two figures. The walls are made of large, grey concrete blocks, and a metal door is visible in the background. The scene is captured in real-life footage.</details> | ![example8](./docs/_static/ltx-video_example_00008.gif)<br><details style="max-width: 300px; margin: auto;"><summary>A woman with blood on her face and a white tank top...</summary>A woman with blood on her face and a white tank top looks down and to her right, then back up as she speaks. She has dark hair pulled back, light skin, and her face and chest are covered in blood. The camera angle is a close-up, focused on the woman's face and upper torso. The lighting is dim and blue-toned, creating a somber and intense atmosphere. The scene appears to be from a movie or TV show.</details> |
| ![example9](./docs/_static/ltx-video_example_00009.gif)<br><details style="max-width: 300px; margin: auto;"><summary>A man with graying hair, a beard, and a gray shirt...</summary>A man with graying hair, a beard, and a gray shirt looks down and to his right, then turns his head to the left. The camera angle is a close-up, focused on the man's face. The lighting is dim, with a greenish tint. The scene appears to be real-life footage. Step</details> | ![example10](./docs/_static/ltx-video_example_00010.gif)<br><details style="max-width: 300px; margin: auto;"><summary>A clear, turquoise river flows through a rocky canyon...</summary>A clear, turquoise river flows through a rocky canyon, cascading over a small waterfall and forming a pool of water at the bottom.The river is the main focus of the scene, with its clear water reflecting the surrounding trees and rocks. The canyon walls are steep and rocky, with some vegetation growing on them. The trees are mostly pine trees, with their green needles contrasting with the brown and gray rocks. The overall tone of the scene is one of peace and tranquility.</details> | ![example11](./docs/_static/ltx-video_example_00011.gif)<br><details style="max-width: 300px; margin: auto;"><summary>A man in a suit enters a room and speaks to two women...</summary>A man in a suit enters a room and speaks to two women sitting on a couch. The man, wearing a dark suit with a gold tie, enters the room from the left and walks towards the center of the frame. He has short gray hair, light skin, and a serious expression. He places his right hand on the back of a chair as he approaches the couch. Two women are seated on a light-colored couch in the background. The woman on the left wears a light blue sweater and has short blonde hair. The woman on the right wears a white sweater and has short blonde hair. The camera remains stationary, focusing on the man as he enters the room. The room is brightly lit, with warm tones reflecting off the walls and furniture. The scene appears to be from a film or television show.</details> | ![example12](./docs/_static/ltx-video_example_00012.gif)<br><details style="max-width: 300px; margin: auto;"><summary>The waves crash against the jagged rocks of the shoreline...</summary>The waves crash against the jagged rocks of the shoreline, sending spray high into the air.The rocks are a dark gray color, with sharp edges and deep crevices. The water is a clear blue-green, with white foam where the waves break against the rocks. The sky is a light gray, with a few white clouds dotting the horizon.</details> |
| ![example13](./docs/_static/ltx-video_example_00013.gif)<br><details style="max-width: 300px; margin: auto;"><summary>The camera pans across a cityscape of tall buildings...</summary>The camera pans across a cityscape of tall buildings with a circular building in the center. The camera moves from left to right, showing the tops of the buildings and the circular building in the center. The buildings are various shades of gray and white, and the circular building has a green roof. The camera angle is high, looking down at the city. The lighting is bright, with the sun shining from the upper left, casting shadows from the buildings. The scene is computer-generated imagery.</details> | ![example14](./docs/_static/ltx-video_example_00014.gif)<br><details style="max-width: 300px; margin: auto;"><summary>A man walks towards a window, looks out, and then turns around...</summary>A man walks towards a window, looks out, and then turns around. He has short, dark hair, dark skin, and is wearing a brown coat over a red and gray scarf. He walks from left to right towards a window, his gaze fixed on something outside. The camera follows him from behind at a medium distance. The room is brightly lit, with white walls and a large window covered by a white curtain. As he approaches the window, he turns his head slightly to the left, then back to the right. He then turns his entire body to the right, facing the window. The camera remains stationary as he stands in front of the window. The scene is captured in real-life footage.</details> | ![example15](./docs/_static/ltx-video_example_00015.gif)<br><details style="max-width: 300px; margin: auto;"><summary>Two police officers in dark blue uniforms and matching hats...</summary>Two police officers in dark blue uniforms and matching hats enter a dimly lit room through a doorway on the left side of the frame. The first officer, with short brown hair and a mustache, steps inside first, followed by his partner, who has a shaved head and a goatee. Both officers have serious expressions and maintain a steady pace as they move deeper into the room. The camera remains stationary, capturing them from a slightly low angle as they enter. The room has exposed brick walls and a corrugated metal ceiling, with a barred window visible in the background. The lighting is low-key, casting shadows on the officers' faces and emphasizing the grim atmosphere. The scene appears to be from a film or television show.</details> | ![example16](./docs/_static/ltx-video_example_00016.gif)<br><details style="max-width: 300px; margin: auto;"><summary>A woman with short brown hair, wearing a maroon sleeveless top...</summary>A woman with short brown hair, wearing a maroon sleeveless top and a silver necklace, walks through a room while talking, then a woman with pink hair and a white shirt appears in the doorway and yells. The first woman walks from left to right, her expression serious; she has light skin and her eyebrows are slightly furrowed. The second woman stands in the doorway, her mouth open in a yell; she has light skin and her eyes are wide. The room is dimly lit, with a bookshelf visible in the background. The camera follows the first woman as she walks, then cuts to a close-up of the second woman's face. The scene is captured in real-life footage.</details> |
# Quick Start Guide
The codebase was tested with Python 3.10.5, CUDA version 12.2, and supports PyTorch >= 2.1.2.
## Installation
```bash
git clone https://github.com/Lightricks/LTX-Video.git
cd LTX-Video
# create env
python -m venv env
source env/bin/activate
python -m pip install -e .\[inference-script\]
```
Then, download the model from [Hugging Face](https://huggingface.co/Lightricks/LTX-Video)
```python
from huggingface_hub import snapshot_download
model_path = 'PATH' # The local directory to save downloaded checkpoint
snapshot_download("Lightricks/LTX-Video", local_dir=model_path, local_dir_use_symlinks=False, repo_type='model')
```
## Inference
To use our model, please follow the inference code in [inference.py](./inference.py):
#### For text-to-video generation:
```bash
python inference.py --ckpt_dir 'PATH' --prompt "PROMPT" --height HEIGHT --width WIDTH --num_frames NUM_FRAMES --seed SEED
```
#### For image-to-video generation:
```bash
python inference.py --ckpt_dir 'PATH' --prompt "PROMPT" --input_image_path IMAGE_PATH --height HEIGHT --width WIDTH --num_frames NUM_FRAMES --seed SEED
```
## ComfyUI Integration
To use our model with ComfyUI, please follow the instructions at [https://github.com/Lightricks/ComfyUI-LTXVideo/](https://github.com/Lightricks/ComfyUI-LTXVideo/).
# Model User Guide
## General tips:
* The model works with resolutions that are divisible by 32 and frame counts of the form 8N + 1 (e.g. 257). If the requested resolution or number of frames does not satisfy these constraints, the input is padded with -1 and then cropped back to the requested resolution and number of frames (see the sketch after these tips).
* The model works best at resolutions below 720 x 1280 and with fewer than 257 frames.
* Prompts should be in English; the more elaborate, the better. A good prompt looks like `The turquoise waves crash against the dark, jagged rocks of the shore, sending white foam spraying into the air. The scene is dominated by the stark contrast between the bright blue water and the dark, almost black rocks. The water is a clear, turquoise color, and the waves are capped with white foam. The rocks are dark and jagged, and they are covered in patches of green moss. The shore is lined with lush green vegetation, including trees and bushes. In the background, there are rolling hills covered in dense forest. The sky is cloudy, and the light is dim.`
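The snippet below is a minimal sketch of this padding rule, mirroring the rounding logic in [inference.py](./inference.py); `pad_dimensions` is an illustrative helper name, not part of the package:
```python
def pad_dimensions(height: int, width: int, num_frames: int):
    # Round spatial dimensions up to the nearest multiple of 32.
    height_padded = ((height - 1) // 32 + 1) * 32
    width_padded = ((width - 1) // 32 + 1) * 32
    # Round the frame count up to the nearest value of the form 8N + 1.
    num_frames_padded = ((num_frames - 2) // 8 + 1) * 8 + 1
    return height_padded, width_padded, num_frames_padded

print(pad_dimensions(480, 704, 121))  # (480, 704, 121) - already valid
print(pad_dimensions(500, 720, 100))  # (512, 736, 105)
```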
## More to come...
# Acknowledgement
We are grateful to the following awesome projects that we drew upon when implementing LTX-Video:
* [DiT](https://github.com/facebookresearch/DiT) and [PixArt-alpha](https://github.com/PixArt-alpha/PixArt-alpha): vision transformers for image generation.
[//]: # (## Citation)

View File

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:b679f14a09d2321b7e34b3ecd23bc01c2cfa75c8d4214a1e59af09826003e2ec
size 7963919

View File

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:336f4baec79c1bd754c7c1bf3ac0792910cc85b6a3bde15fabeb0fb0f33299ff
size 7897781

View File

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:ab2cb063b872d487fbbab821de7fe8157e7f87af03bd780d55116cb98fc8fc45
size 4429543

View File

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:0a599a641cc3367fab5a6dd75fc89be63208cc708a1173b2ce7bfeac7208f831
size 6713603

View File

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:87fdb9556c1218db4b929994e9b807d1d63f4676defef5b418a4edb1ddaa8422
size 5732587

View File

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:f56f3dcc84a871ab4ef1510120f7a4586c7044c5609a897d8177ae8d52eb3eae
size 4239543

View File

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:a08a06681334856db516e969a9ae4290acfd7550f7b970331e87d0223e282bcc
size 7829259

View File

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:3242c65e11a40177c91b48d8ee18084dc4f907ffe5f11217c5f3e5aa2ca3fe36
size 6229734

View File

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:aa1e0a2ba75c6bda530a798e8aaeb3edc19413970b99d2a67b79839cd14f2fe5
size 6389700

View File

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:bcf1e084e936a75eaae73a29f60935c469b1fc34eb3f5ad89483e88b3a2eaffe
size 6193172

View File

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:3e3d04f5763ecb416b3b80c3488e48c49991d80661c94e8f08dddd7b890b1b75
size 5345673

View File

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:39790832fd9bff62c99a799eb4843cf99c9ab73c3f181656acbbd0d4ebf7f471
size 7474091

View File

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:aa7eb790b43f8a55c01d1fbed4c7a7f657fb2ca78a9685833cf9cb558d2002c1
size 9024843

View File

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:4f7afc4b498a927dcc4e1492548db5c32fa76d117e0410d11e1e0b1929153e54
size 7434241

View File

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:d897c9656e0cba89512ab9d2cbe2d2c0f2ddf907dcab5f7eadab4b96b1cb1efe
size 6556457

View File

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:c74f35e37bba01817ca4ac01dd9195863100eb83e7cb73bbea2b53e0f69a8628
size 7412915

444
inference.py Normal file
View File

@@ -0,0 +1,444 @@
import argparse
import json
import os
import random
from datetime import datetime
from pathlib import Path
from diffusers.utils import logging
import imageio
import numpy as np
import safetensors.torch
import torch
import torch.nn.functional as F
from PIL import Image
from transformers import T5EncoderModel, T5Tokenizer
from ltx_video.models.autoencoders.causal_video_autoencoder import (
CausalVideoAutoencoder,
)
from ltx_video.models.transformers.symmetric_patchifier import SymmetricPatchifier
from ltx_video.models.transformers.transformer3d import Transformer3DModel
from ltx_video.pipelines.pipeline_ltx_video import LTXVideoPipeline
from ltx_video.schedulers.rf import RectifiedFlowScheduler
from ltx_video.utils.conditioning_method import ConditioningMethod
MAX_HEIGHT = 720
MAX_WIDTH = 1280
MAX_NUM_FRAMES = 257
def load_vae(vae_dir):
vae_ckpt_path = vae_dir / "vae_diffusion_pytorch_model.safetensors"
vae_config_path = vae_dir / "config.json"
with open(vae_config_path, "r") as f:
vae_config = json.load(f)
vae = CausalVideoAutoencoder.from_config(vae_config)
vae_state_dict = safetensors.torch.load_file(vae_ckpt_path)
vae.load_state_dict(vae_state_dict)
if torch.cuda.is_available():
vae = vae.cuda()
return vae.to(torch.bfloat16)
def load_unet(unet_dir):
unet_ckpt_path = unet_dir / "unet_diffusion_pytorch_model.safetensors"
unet_config_path = unet_dir / "config.json"
transformer_config = Transformer3DModel.load_config(unet_config_path)
transformer = Transformer3DModel.from_config(transformer_config)
unet_state_dict = safetensors.torch.load_file(unet_ckpt_path)
transformer.load_state_dict(unet_state_dict, strict=True)
if torch.cuda.is_available():
transformer = transformer.cuda()
return transformer
def load_scheduler(scheduler_dir):
scheduler_config_path = scheduler_dir / "scheduler_config.json"
scheduler_config = RectifiedFlowScheduler.load_config(scheduler_config_path)
return RectifiedFlowScheduler.from_config(scheduler_config)
def load_image_to_tensor_with_resize_and_crop(
image_path, target_height=512, target_width=768
):
image = Image.open(image_path).convert("RGB")
input_width, input_height = image.size
aspect_ratio_target = target_width / target_height
aspect_ratio_frame = input_width / input_height
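# Center-crop the input to the target aspect ratio, then resize to (target_width, target_height)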
if aspect_ratio_frame > aspect_ratio_target:
new_width = int(input_height * aspect_ratio_target)
new_height = input_height
x_start = (input_width - new_width) // 2
y_start = 0
else:
new_width = input_width
new_height = int(input_width / aspect_ratio_target)
x_start = 0
y_start = (input_height - new_height) // 2
image = image.crop((x_start, y_start, x_start + new_width, y_start + new_height))
image = image.resize((target_width, target_height))
frame_tensor = torch.tensor(np.array(image)).permute(2, 0, 1).float()
frame_tensor = (frame_tensor / 127.5) - 1.0
# Create 5D tensor: (batch_size=1, channels=3, num_frames=1, height, width)
return frame_tensor.unsqueeze(0).unsqueeze(2)
def calculate_padding(
source_height: int, source_width: int, target_height: int, target_width: int
) -> tuple[int, int, int, int]:
# Calculate total padding needed
pad_height = target_height - source_height
pad_width = target_width - source_width
# Calculate padding for each side
pad_top = pad_height // 2
pad_bottom = pad_height - pad_top # Handles odd padding
pad_left = pad_width // 2
pad_right = pad_width - pad_left # Handles odd padding
# Return the padding amounts (used with F.pad)
# Padding format is (left, right, top, bottom)
padding = (pad_left, pad_right, pad_top, pad_bottom)
return padding
def convert_prompt_to_filename(text: str, max_len: int = 20) -> str:
# Remove non-letters and convert to lowercase
clean_text = "".join(
char.lower() for char in text if char.isalpha() or char.isspace()
)
# Split into words
words = clean_text.split()
# Build result string keeping track of length
result = []
current_length = 0
for word in words:
# Add the word only if it still fits within max_len
new_length = current_length + len(word)
if new_length <= max_len:
result.append(word)
current_length += len(word)
else:
break
return "-".join(result)
# Generate output video name
def get_unique_filename(
base: str,
ext: str,
prompt: str,
seed: int,
resolution: tuple[int, int, int],
dir: Path,
endswith=None,
index_range=1000,
) -> Path:
base_filename = f"{base}_{convert_prompt_to_filename(prompt, max_len=30)}_{seed}_{resolution[0]}x{resolution[1]}x{resolution[2]}"
for i in range(index_range):
filename = dir / f"{base_filename}_{i}{endswith if endswith else ''}{ext}"
if not os.path.exists(filename):
return filename
raise FileExistsError(
f"Could not find a unique filename after {index_range} attempts."
)
def seed_everything(seed: int):
random.seed(seed)
np.random.seed(seed)
torch.manual_seed(seed)
if torch.cuda.is_available():
torch.cuda.manual_seed(seed)
def main():
parser = argparse.ArgumentParser(
description="Load models from separate directories and run the pipeline."
)
# Directories
parser.add_argument(
"--ckpt_dir",
type=str,
required=True,
help="Path to the directory containing unet, vae, and scheduler subdirectories",
)
parser.add_argument(
"--input_video_path",
type=str,
help="Path to the input video file (first frame used)",
)
parser.add_argument(
"--input_image_path", type=str, help="Path to the input image file"
)
parser.add_argument(
"--output_path",
type=str,
default=None,
help="Path to the folder to save output video, if None will save in outputs/ directory.",
)
parser.add_argument("--seed", type=int, default="171198")
# Pipeline parameters
parser.add_argument(
"--num_inference_steps", type=int, default=40, help="Number of inference steps"
)
parser.add_argument(
"--num_images_per_prompt",
type=int,
default=1,
help="Number of images per prompt",
)
parser.add_argument(
"--guidance_scale",
type=float,
default=3,
help="Guidance scale for the pipeline",
)
parser.add_argument(
"--height",
type=int,
default=480,
help="Height of the output video frames. Optional if an input image provided.",
)
parser.add_argument(
"--width",
type=int,
default=704,
help="Width of the output video frames. If None will infer from input image.",
)
parser.add_argument(
"--num_frames",
type=int,
default=121,
help="Number of frames to generate in the output video",
)
parser.add_argument(
"--frame_rate", type=int, default=25, help="Frame rate for the output video"
)
parser.add_argument(
"--bfloat16",
action="store_true",
help="Denoise in bfloat16",
)
# Prompts
parser.add_argument(
"--prompt",
type=str,
help="Text prompt to guide generation",
)
parser.add_argument(
"--negative_prompt",
type=str,
default="worst quality, inconsistent motion, blurry, jittery, distorted",
help="Negative prompt for undesired features",
)
logger = logging.get_logger(__name__)
args = parser.parse_args()
logger.warning(f"Running generation with arguments: {args}")
seed_everything(args.seed)
output_dir = (
Path(args.output_path)
if args.output_path
else Path(f"outputs/{datetime.today().strftime('%Y-%m-%d')}")
)
output_dir.mkdir(parents=True, exist_ok=True)
# Load image
if args.input_image_path:
media_items_prepad = load_image_to_tensor_with_resize_and_crop(
args.input_image_path, args.height, args.width
)
else:
media_items_prepad = None
height = args.height if args.height else media_items_prepad.shape[-2]
width = args.width if args.width else media_items_prepad.shape[-1]
num_frames = args.num_frames
if height > MAX_HEIGHT or width > MAX_WIDTH or num_frames > MAX_NUM_FRAMES:
logger.warning(
f"Input resolution or number of frames {height}x{width}x{num_frames} is too big, it is suggested to use the resolution below {MAX_HEIGHT}x{MAX_WIDTH}x{MAX_NUM_FRAMES}."
)
# Adjust dimensions to be divisible by 32 and num_frames to be (N * 8 + 1)
height_padded = ((height - 1) // 32 + 1) * 32
width_padded = ((width - 1) // 32 + 1) * 32
num_frames_padded = ((num_frames - 2) // 8 + 1) * 8 + 1
padding = calculate_padding(height, width, height_padded, width_padded)
logger.warning(
f"Padded dimensions: {height_padded}x{width_padded}x{num_frames_padded}"
)
if media_items_prepad is not None:
media_items = F.pad(
media_items_prepad, padding, mode="constant", value=-1
) # -1 is the value for padding since the image is normalized to -1, 1
else:
media_items = None
# Paths for the separate mode directories
ckpt_dir = Path(args.ckpt_dir)
unet_dir = ckpt_dir / "unet"
vae_dir = ckpt_dir / "vae"
scheduler_dir = ckpt_dir / "scheduler"
# Load models
vae = load_vae(vae_dir)
unet = load_unet(unet_dir)
scheduler = load_scheduler(scheduler_dir)
patchifier = SymmetricPatchifier(patch_size=1)
text_encoder = T5EncoderModel.from_pretrained(
"PixArt-alpha/PixArt-XL-2-1024-MS", subfolder="text_encoder"
)
if torch.cuda.is_available():
text_encoder = text_encoder.to("cuda")
tokenizer = T5Tokenizer.from_pretrained(
"PixArt-alpha/PixArt-XL-2-1024-MS", subfolder="tokenizer"
)
if args.bfloat16 and unet.dtype != torch.bfloat16:
unet = unet.to(torch.bfloat16)
# Use submodels for the pipeline
submodel_dict = {
"transformer": unet,
"patchifier": patchifier,
"text_encoder": text_encoder,
"tokenizer": tokenizer,
"scheduler": scheduler,
"vae": vae,
}
pipeline = LTXVideoPipeline(**submodel_dict)
if torch.cuda.is_available():
pipeline = pipeline.to("cuda")
# Prepare input for the pipeline
sample = {
"prompt": args.prompt,
"prompt_attention_mask": None,
"negative_prompt": args.negative_prompt,
"negative_prompt_attention_mask": None,
"media_items": media_items,
}
generator = torch.Generator(
device="cuda" if torch.cuda.is_available() else "cpu"
).manual_seed(args.seed)
images = pipeline(
num_inference_steps=args.num_inference_steps,
num_images_per_prompt=args.num_images_per_prompt,
guidance_scale=args.guidance_scale,
generator=generator,
output_type="pt",
callback_on_step_end=None,
height=height_padded,
width=width_padded,
num_frames=num_frames_padded,
frame_rate=args.frame_rate,
**sample,
is_video=True,
vae_per_channel_normalize=True,
conditioning_method=(
ConditioningMethod.FIRST_FRAME
if media_items is not None
else ConditioningMethod.UNCONDITIONAL
),
mixed_precision=not args.bfloat16,
).images
# Crop the padded images to the desired resolution and number of frames
(pad_left, pad_right, pad_top, pad_bottom) = padding
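# Negate right/bottom paddings to use them as end indices when slicing; a zero pad maps to the full dimension below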
pad_bottom = -pad_bottom
pad_right = -pad_right
if pad_bottom == 0:
pad_bottom = images.shape[3]
if pad_right == 0:
pad_right = images.shape[4]
images = images[:, :, :num_frames, pad_top:pad_bottom, pad_left:pad_right]
for i in range(images.shape[0]):
# Gathering from B, C, F, H, W to C, F, H, W and then permuting to F, H, W, C
video_np = images[i].permute(1, 2, 3, 0).cpu().float().numpy()
# Unnormalizing images to [0, 255] range
video_np = (video_np * 255).astype(np.uint8)
fps = args.frame_rate
height, width = video_np.shape[1:3]
# In case a single image is generated
if video_np.shape[0] == 1:
output_filename = get_unique_filename(
f"image_output_{i}",
".png",
prompt=args.prompt,
seed=args.seed,
resolution=(height, width, num_frames),
dir=output_dir,
)
imageio.imwrite(output_filename, video_np[0])
else:
if args.input_image_path:
base_filename = f"img_to_vid_{i}"
else:
base_filename = f"text_to_vid_{i}"
output_filename = get_unique_filename(
base_filename,
".mp4",
prompt=args.prompt,
seed=args.seed,
resolution=(height, width, num_frames),
dir=output_dir,
)
# Write video
with imageio.get_writer(output_filename, fps=fps) as video:
for frame in video_np:
video.append_data(frame)
# Write condition image
if args.input_image_path:
reference_image = (
(
media_items_prepad[0, :, 0].permute(1, 2, 0).cpu().data.numpy()
+ 1.0
)
/ 2.0
* 255
)
imageio.imwrite(
get_unique_filename(
base_filename,
".png",
prompt=args.prompt,
seed=args.seed,
resolution=(height, width, num_frames),
dir=output_dir,
endswith="_condition",
),
reference_image.astype(np.uint8),
)
logger.warning(f"Output saved to {output_dir}")
if __name__ == "__main__":
main()

0
ltx_video/__init__.py Normal file
View File

View File

View File

@@ -0,0 +1,62 @@
from typing import Tuple, Union
import torch
import torch.nn as nn
class CausalConv3d(nn.Module):
def __init__(
self,
in_channels,
out_channels,
kernel_size: int = 3,
stride: Union[int, Tuple[int]] = 1,
dilation: int = 1,
groups: int = 1,
**kwargs,
):
super().__init__()
self.in_channels = in_channels
self.out_channels = out_channels
kernel_size = (kernel_size, kernel_size, kernel_size)
self.time_kernel_size = kernel_size[0]
dilation = (dilation, 1, 1)
height_pad = kernel_size[1] // 2
width_pad = kernel_size[2] // 2
padding = (0, height_pad, width_pad)
self.conv = nn.Conv3d(
in_channels,
out_channels,
kernel_size,
stride=stride,
dilation=dilation,
padding=padding,
padding_mode="zeros",
groups=groups,
)
def forward(self, x, causal: bool = True):
if causal:
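# Causal mode: replicate the first frame (time_kernel_size - 1) times so the temporal conv never sees future frames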
first_frame_pad = x[:, :, :1, :, :].repeat(
(1, 1, self.time_kernel_size - 1, 1, 1)
)
x = torch.concatenate((first_frame_pad, x), dim=2)
else:
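# Non-causal mode: pad the time axis symmetrically by replicating the first and last frames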
first_frame_pad = x[:, :, :1, :, :].repeat(
(1, 1, (self.time_kernel_size - 1) // 2, 1, 1)
)
last_frame_pad = x[:, :, -1:, :, :].repeat(
(1, 1, (self.time_kernel_size - 1) // 2, 1, 1)
)
x = torch.concatenate((first_frame_pad, x, last_frame_pad), dim=2)
x = self.conv(x)
return x
@property
def weight(self):
return self.conv.weight

File diff suppressed because it is too large

View File

@@ -0,0 +1,82 @@
from typing import Tuple, Union
import torch
from ltx_video.models.autoencoders.dual_conv3d import DualConv3d
from ltx_video.models.autoencoders.causal_conv3d import CausalConv3d
def make_conv_nd(
dims: Union[int, Tuple[int, int]],
in_channels: int,
out_channels: int,
kernel_size: int,
stride=1,
padding=0,
dilation=1,
groups=1,
bias=True,
causal=False,
):
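# dims selects the convolution type: 2 -> Conv2d, 3 -> Conv3d (optionally causal), (2, 1) -> DualConv3d (factorized spatial + temporal conv)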
if dims == 2:
return torch.nn.Conv2d(
in_channels=in_channels,
out_channels=out_channels,
kernel_size=kernel_size,
stride=stride,
padding=padding,
dilation=dilation,
groups=groups,
bias=bias,
)
elif dims == 3:
if causal:
return CausalConv3d(
in_channels=in_channels,
out_channels=out_channels,
kernel_size=kernel_size,
stride=stride,
padding=padding,
dilation=dilation,
groups=groups,
bias=bias,
)
return torch.nn.Conv3d(
in_channels=in_channels,
out_channels=out_channels,
kernel_size=kernel_size,
stride=stride,
padding=padding,
dilation=dilation,
groups=groups,
bias=bias,
)
elif dims == (2, 1):
return DualConv3d(
in_channels=in_channels,
out_channels=out_channels,
kernel_size=kernel_size,
stride=stride,
padding=padding,
bias=bias,
)
else:
raise ValueError(f"unsupported dimensions: {dims}")
def make_linear_nd(
dims: int,
in_channels: int,
out_channels: int,
bias=True,
):
if dims == 2:
return torch.nn.Conv2d(
in_channels=in_channels, out_channels=out_channels, kernel_size=1, bias=bias
)
elif dims == 3 or dims == (2, 1):
return torch.nn.Conv3d(
in_channels=in_channels, out_channels=out_channels, kernel_size=1, bias=bias
)
else:
raise ValueError(f"unsupported dimensions: {dims}")

View File

@@ -0,0 +1,195 @@
import math
from typing import Tuple, Union
import torch
import torch.nn as nn
import torch.nn.functional as F
from einops import rearrange
class DualConv3d(nn.Module):
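# Factorized 3D convolution: a spatial (1 x k x k) conv followed by a temporal (k x 1 x 1) conv,
# run either as true 3D convolutions or as reshaped 2D/1D convolutions (forward_with_2d)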
def __init__(
self,
in_channels,
out_channels,
kernel_size,
stride: Union[int, Tuple[int, int, int]] = 1,
padding: Union[int, Tuple[int, int, int]] = 0,
dilation: Union[int, Tuple[int, int, int]] = 1,
groups=1,
bias=True,
):
super(DualConv3d, self).__init__()
self.in_channels = in_channels
self.out_channels = out_channels
# Ensure kernel_size, stride, padding, and dilation are tuples of length 3
if isinstance(kernel_size, int):
kernel_size = (kernel_size, kernel_size, kernel_size)
if kernel_size == (1, 1, 1):
raise ValueError(
"kernel_size must be greater than 1. Use make_linear_nd instead."
)
if isinstance(stride, int):
stride = (stride, stride, stride)
if isinstance(padding, int):
padding = (padding, padding, padding)
if isinstance(dilation, int):
dilation = (dilation, dilation, dilation)
# Set parameters for convolutions
self.groups = groups
self.bias = bias
# Define the size of the channels after the first convolution
intermediate_channels = (
out_channels if in_channels < out_channels else in_channels
)
# Define parameters for the first convolution
self.weight1 = nn.Parameter(
torch.Tensor(
intermediate_channels,
in_channels // groups,
1,
kernel_size[1],
kernel_size[2],
)
)
self.stride1 = (1, stride[1], stride[2])
self.padding1 = (0, padding[1], padding[2])
self.dilation1 = (1, dilation[1], dilation[2])
if bias:
self.bias1 = nn.Parameter(torch.Tensor(intermediate_channels))
else:
self.register_parameter("bias1", None)
# Define parameters for the second convolution
self.weight2 = nn.Parameter(
torch.Tensor(
out_channels, intermediate_channels // groups, kernel_size[0], 1, 1
)
)
self.stride2 = (stride[0], 1, 1)
self.padding2 = (padding[0], 0, 0)
self.dilation2 = (dilation[0], 1, 1)
if bias:
self.bias2 = nn.Parameter(torch.Tensor(out_channels))
else:
self.register_parameter("bias2", None)
# Initialize weights and biases
self.reset_parameters()
def reset_parameters(self):
nn.init.kaiming_uniform_(self.weight1, a=math.sqrt(5))
nn.init.kaiming_uniform_(self.weight2, a=math.sqrt(5))
if self.bias:
fan_in1, _ = nn.init._calculate_fan_in_and_fan_out(self.weight1)
bound1 = 1 / math.sqrt(fan_in1)
nn.init.uniform_(self.bias1, -bound1, bound1)
fan_in2, _ = nn.init._calculate_fan_in_and_fan_out(self.weight2)
bound2 = 1 / math.sqrt(fan_in2)
nn.init.uniform_(self.bias2, -bound2, bound2)
def forward(self, x, use_conv3d=False, skip_time_conv=False):
if use_conv3d:
return self.forward_with_3d(x=x, skip_time_conv=skip_time_conv)
else:
return self.forward_with_2d(x=x, skip_time_conv=skip_time_conv)
def forward_with_3d(self, x, skip_time_conv):
# First convolution
x = F.conv3d(
x,
self.weight1,
self.bias1,
self.stride1,
self.padding1,
self.dilation1,
self.groups,
)
if skip_time_conv:
return x
# Second convolution
x = F.conv3d(
x,
self.weight2,
self.bias2,
self.stride2,
self.padding2,
self.dilation2,
self.groups,
)
return x
def forward_with_2d(self, x, skip_time_conv):
b, c, d, h, w = x.shape
# First 2D convolution
x = rearrange(x, "b c d h w -> (b d) c h w")
# Squeeze the depth dimension out of weight1 since it's 1
weight1 = self.weight1.squeeze(2)
# Select stride, padding, and dilation for the 2D convolution
stride1 = (self.stride1[1], self.stride1[2])
padding1 = (self.padding1[1], self.padding1[2])
dilation1 = (self.dilation1[1], self.dilation1[2])
x = F.conv2d(x, weight1, self.bias1, stride1, padding1, dilation1, self.groups)
_, _, h, w = x.shape
if skip_time_conv:
x = rearrange(x, "(b d) c h w -> b c d h w", b=b)
return x
# Second convolution which is essentially treated as a 1D convolution across the 'd' dimension
x = rearrange(x, "(b d) c h w -> (b h w) c d", b=b)
# Reshape weight2 to match the expected dimensions for conv1d
weight2 = self.weight2.squeeze(-1).squeeze(-1)
# Use only the relevant dimension for stride, padding, and dilation for the 1D convolution
stride2 = self.stride2[0]
padding2 = self.padding2[0]
dilation2 = self.dilation2[0]
x = F.conv1d(x, weight2, self.bias2, stride2, padding2, dilation2, self.groups)
x = rearrange(x, "(b h w) c d -> b c d h w", b=b, h=h, w=w)
return x
@property
def weight(self):
return self.weight2
def test_dual_conv3d_consistency():
# Initialize parameters
in_channels = 3
out_channels = 5
kernel_size = (3, 3, 3)
stride = (2, 2, 2)
padding = (1, 1, 1)
# Create an instance of the DualConv3d class
dual_conv3d = DualConv3d(
in_channels=in_channels,
out_channels=out_channels,
kernel_size=kernel_size,
stride=stride,
padding=padding,
bias=True,
)
# Example input tensor
test_input = torch.randn(1, 3, 10, 10, 10)
# Perform forward passes with both 3D and 2D settings
output_conv3d = dual_conv3d(test_input, use_conv3d=True)
output_2d = dual_conv3d(test_input, use_conv3d=False)
# Assert that the outputs from both methods are sufficiently close
assert torch.allclose(
output_conv3d, output_2d, atol=1e-6
), "Outputs are not consistent between 3D and 2D convolutions."

View File

@@ -0,0 +1,12 @@
import torch
from torch import nn
class PixelNorm(nn.Module):
def __init__(self, dim=1, eps=1e-8):
super(PixelNorm, self).__init__()
self.dim = dim
self.eps = eps
def forward(self, x):
return x / torch.sqrt(torch.mean(x**2, dim=self.dim, keepdim=True) + self.eps)
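A minimal usage sketch of PixelNorm: after the call, the mean square over the chosen dimension is approximately 1 at every position.

import torch

norm = PixelNorm(dim=1)
x = torch.randn(2, 8, 4, 4)
y = norm(x)
print(y.pow(2).mean(dim=1).mean().item())  # ~1.0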

View File

@@ -0,0 +1,343 @@
from typing import Optional, Union
import torch
import inspect
import math
import torch.nn as nn
from diffusers import ConfigMixin, ModelMixin
from diffusers.models.autoencoders.vae import (
DecoderOutput,
DiagonalGaussianDistribution,
)
from diffusers.models.modeling_outputs import AutoencoderKLOutput
from ltx_video.models.autoencoders.conv_nd_factory import make_conv_nd
class AutoencoderKLWrapper(ModelMixin, ConfigMixin):
"""Variational Autoencoder (VAE) model with KL loss.
VAE from the paper Auto-Encoding Variational Bayes by Diederik P. Kingma and Max Welling.
This model is a wrapper around an encoder and a decoder, and it adds a KL loss term to the reconstruction loss.
Args:
encoder (`nn.Module`):
Encoder module.
decoder (`nn.Module`):
Decoder module.
latent_channels (`int`, *optional*, defaults to 4):
Number of latent channels.
"""
def __init__(
self,
encoder: nn.Module,
decoder: nn.Module,
latent_channels: int = 4,
dims: int = 2,
sample_size=512,
use_quant_conv: bool = True,
):
super().__init__()
# pass init params to Encoder
self.encoder = encoder
self.use_quant_conv = use_quant_conv
# pass init params to Decoder
quant_dims = 2 if dims == 2 else 3
self.decoder = decoder
if use_quant_conv:
self.quant_conv = make_conv_nd(
quant_dims, 2 * latent_channels, 2 * latent_channels, 1
)
self.post_quant_conv = make_conv_nd(
quant_dims, latent_channels, latent_channels, 1
)
else:
self.quant_conv = nn.Identity()
self.post_quant_conv = nn.Identity()
self.use_z_tiling = False
self.use_hw_tiling = False
self.dims = dims
self.z_sample_size = 1
self.decoder_params = inspect.signature(self.decoder.forward).parameters
# only relevant if vae tiling is enabled
self.set_tiling_params(sample_size=sample_size, overlap_factor=0.25)
def set_tiling_params(self, sample_size: int = 512, overlap_factor: float = 0.25):
self.tile_sample_min_size = sample_size
num_blocks = len(self.encoder.down_blocks)
self.tile_latent_min_size = int(sample_size / (2 ** (num_blocks - 1)))
self.tile_overlap_factor = overlap_factor
def enable_z_tiling(self, z_sample_size: int = 8):
r"""
Enable tiling during VAE decoding.
When this option is enabled, the VAE will split the input tensor in tiles to compute decoding in several
steps. This is useful to save some memory and allow larger batch sizes.
"""
self.use_z_tiling = z_sample_size > 1
self.z_sample_size = z_sample_size
assert (
z_sample_size % 8 == 0 or z_sample_size == 1
), f"z_sample_size must be a multiple of 8 or 1. Got {z_sample_size}."
def disable_z_tiling(self):
r"""
Disable tiling during VAE decoding. If `enable_z_tiling` was previously invoked, this method will go back to computing
decoding in one step.
"""
self.use_z_tiling = False
def enable_hw_tiling(self):
r"""
Enable tiling during VAE decoding along the height and width dimension.
"""
self.use_hw_tiling = True
def disable_hw_tiling(self):
r"""
Disable tiling during VAE decoding along the height and width dimension.
"""
self.use_hw_tiling = False
def _hw_tiled_encode(self, x: torch.FloatTensor, return_dict: bool = True):
overlap_size = int(self.tile_sample_min_size * (1 - self.tile_overlap_factor))
blend_extent = int(self.tile_latent_min_size * self.tile_overlap_factor)
row_limit = self.tile_latent_min_size - blend_extent
# Split the image into 512x512 tiles and encode them separately.
rows = []
for i in range(0, x.shape[3], overlap_size):
row = []
for j in range(0, x.shape[4], overlap_size):
tile = x[
:,
:,
:,
i : i + self.tile_sample_min_size,
j : j + self.tile_sample_min_size,
]
tile = self.encoder(tile)
tile = self.quant_conv(tile)
row.append(tile)
rows.append(row)
result_rows = []
for i, row in enumerate(rows):
result_row = []
for j, tile in enumerate(row):
# blend the above tile and the left tile
# to the current tile and add the current tile to the result row
if i > 0:
tile = self.blend_v(rows[i - 1][j], tile, blend_extent)
if j > 0:
tile = self.blend_h(row[j - 1], tile, blend_extent)
result_row.append(tile[:, :, :, :row_limit, :row_limit])
result_rows.append(torch.cat(result_row, dim=4))
moments = torch.cat(result_rows, dim=3)
return moments
def blend_z(
self, a: torch.Tensor, b: torch.Tensor, blend_extent: int
) -> torch.Tensor:
blend_extent = min(a.shape[2], b.shape[2], blend_extent)
for z in range(blend_extent):
b[:, :, z, :, :] = a[:, :, -blend_extent + z, :, :] * (
1 - z / blend_extent
) + b[:, :, z, :, :] * (z / blend_extent)
return b
def blend_v(
self, a: torch.Tensor, b: torch.Tensor, blend_extent: int
) -> torch.Tensor:
blend_extent = min(a.shape[3], b.shape[3], blend_extent)
for y in range(blend_extent):
b[:, :, :, y, :] = a[:, :, :, -blend_extent + y, :] * (
1 - y / blend_extent
) + b[:, :, :, y, :] * (y / blend_extent)
return b
def blend_h(
self, a: torch.Tensor, b: torch.Tensor, blend_extent: int
) -> torch.Tensor:
blend_extent = min(a.shape[4], b.shape[4], blend_extent)
for x in range(blend_extent):
b[:, :, :, :, x] = a[:, :, :, :, -blend_extent + x] * (
1 - x / blend_extent
) + b[:, :, :, :, x] * (x / blend_extent)
return b
def _hw_tiled_decode(self, z: torch.FloatTensor, target_shape):
overlap_size = int(self.tile_latent_min_size * (1 - self.tile_overlap_factor))
blend_extent = int(self.tile_sample_min_size * self.tile_overlap_factor)
row_limit = self.tile_sample_min_size - blend_extent
tile_target_shape = (
*target_shape[:3],
self.tile_sample_min_size,
self.tile_sample_min_size,
)
# Split z into overlapping 64x64 tiles and decode them separately.
# The tiles have an overlap to avoid seams between tiles.
rows = []
for i in range(0, z.shape[3], overlap_size):
row = []
for j in range(0, z.shape[4], overlap_size):
tile = z[
:,
:,
:,
i : i + self.tile_latent_min_size,
j : j + self.tile_latent_min_size,
]
tile = self.post_quant_conv(tile)
decoded = self.decoder(tile, target_shape=tile_target_shape)
row.append(decoded)
rows.append(row)
result_rows = []
for i, row in enumerate(rows):
result_row = []
for j, tile in enumerate(row):
# blend the above tile and the left tile
# to the current tile and add the current tile to the result row
if i > 0:
tile = self.blend_v(rows[i - 1][j], tile, blend_extent)
if j > 0:
tile = self.blend_h(row[j - 1], tile, blend_extent)
result_row.append(tile[:, :, :, :row_limit, :row_limit])
result_rows.append(torch.cat(result_row, dim=4))
dec = torch.cat(result_rows, dim=3)
return dec
def encode(
self, z: torch.FloatTensor, return_dict: bool = True
) -> Union[DecoderOutput, torch.FloatTensor]:
if self.use_z_tiling and z.shape[2] > self.z_sample_size > 1:
num_splits = z.shape[2] // self.z_sample_size
sizes = [self.z_sample_size] * num_splits
sizes = (
sizes + [z.shape[2] - sum(sizes)]
if z.shape[2] - sum(sizes) > 0
else sizes
)
tiles = z.split(sizes, dim=2)
moments_tiles = [
(
self._hw_tiled_encode(z_tile, return_dict)
if self.use_hw_tiling
else self._encode(z_tile)
)
for z_tile in tiles
]
moments = torch.cat(moments_tiles, dim=2)
else:
moments = (
self._hw_tiled_encode(z, return_dict)
if self.use_hw_tiling
else self._encode(z)
)
posterior = DiagonalGaussianDistribution(moments)
if not return_dict:
return (posterior,)
return AutoencoderKLOutput(latent_dist=posterior)
def _encode(self, x: torch.FloatTensor) -> AutoencoderKLOutput:
h = self.encoder(x)
moments = self.quant_conv(h)
return moments
def _decode(
self,
z: torch.FloatTensor,
target_shape=None,
timesteps: Optional[torch.Tensor] = None,
) -> Union[DecoderOutput, torch.FloatTensor]:
z = self.post_quant_conv(z)
if "timesteps" in self.decoder_params:
dec = self.decoder(z, target_shape=target_shape, timesteps=timesteps)
else:
dec = self.decoder(z, target_shape=target_shape)
return dec
def decode(
self,
z: torch.FloatTensor,
return_dict: bool = True,
target_shape=None,
timesteps: Optional[torch.Tensor] = None,
) -> Union[DecoderOutput, torch.FloatTensor]:
assert target_shape is not None, "target_shape must be provided for decoding"
if self.use_z_tiling and z.shape[2] > self.z_sample_size > 1:
reduction_factor = int(
self.encoder.patch_size_t
* 2
** (
len(self.encoder.down_blocks)
- 1
- math.sqrt(self.encoder.patch_size)
)
)
split_size = self.z_sample_size // reduction_factor
num_splits = z.shape[2] // split_size
# copy target shape, and divide frame dimension (=2) by the context size
target_shape_split = list(target_shape)
target_shape_split[2] = target_shape[2] // num_splits
decoded_tiles = [
(
self._hw_tiled_decode(z_tile, target_shape_split)
if self.use_hw_tiling
else self._decode(z_tile, target_shape=target_shape_split)
)
for z_tile in torch.tensor_split(z, num_splits, dim=2)
]
decoded = torch.cat(decoded_tiles, dim=2)
else:
decoded = (
self._hw_tiled_decode(z, target_shape)
if self.use_hw_tiling
else self._decode(z, target_shape=target_shape, timesteps=timesteps)
)
if not return_dict:
return (decoded,)
return DecoderOutput(sample=decoded)
def forward(
self,
sample: torch.FloatTensor,
sample_posterior: bool = False,
return_dict: bool = True,
generator: Optional[torch.Generator] = None,
) -> Union[DecoderOutput, torch.FloatTensor]:
r"""
Args:
sample (`torch.FloatTensor`): Input sample.
sample_posterior (`bool`, *optional*, defaults to `False`):
Whether to sample from the posterior.
return_dict (`bool`, *optional*, defaults to `True`):
Whether to return a [`DecoderOutput`] instead of a plain tuple.
generator (`torch.Generator`, *optional*):
Generator used to sample from the posterior.
"""
x = sample
posterior = self.encode(x).latent_dist
if sample_posterior:
z = posterior.sample(generator=generator)
else:
z = posterior.mode()
dec = self.decode(z, target_shape=sample.shape).sample
if not return_dict:
return (dec,)
return DecoderOutput(sample=dec)
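The blend helpers above implement a simple linear cross-fade over the tile overlap. A standalone sketch of the same loop used by blend_h, on toy tensors:

import torch

blend_extent = 4
a = torch.ones(1, 1, 1, 1, 8)   # left tile
b = torch.zeros(1, 1, 1, 1, 8)  # right tile
for x in range(blend_extent):
    fade_out = 1 - x / blend_extent
    fade_in = x / blend_extent
    b[:, :, :, :, x] = a[:, :, :, :, -blend_extent + x] * fade_out + b[:, :, :, :, x] * fade_in
print(b[0, 0, 0, 0, :blend_extent])  # tensor([1.0000, 0.7500, 0.5000, 0.2500])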

View File

@@ -0,0 +1,195 @@
import torch
from diffusers import AutoencoderKL
from einops import rearrange
from torch import Tensor
from ltx_video.models.autoencoders.causal_video_autoencoder import (
CausalVideoAutoencoder,
)
from ltx_video.models.autoencoders.video_autoencoder import (
Downsample3D,
VideoAutoencoder,
)
try:
import torch_xla.core.xla_model as xm
except ImportError:
xm = None
def vae_encode(
media_items: Tensor,
vae: AutoencoderKL,
split_size: int = 1,
vae_per_channel_normalize=False,
) -> Tensor:
"""
Encodes media items (images or videos) into latent representations using a specified VAE model.
The function supports processing batches of images or video frames and can handle the processing
in smaller sub-batches if needed.
Args:
media_items (Tensor): A torch Tensor containing the media items to encode. The expected
shape is (batch_size, channels, height, width) for images or (batch_size, channels,
frames, height, width) for videos.
vae (AutoencoderKL): An instance of the `AutoencoderKL` class from the `diffusers` library,
pre-configured and loaded with the appropriate model weights.
split_size (int, optional): The number of sub-batches to split the input batch into for encoding.
If set to more than 1, the input media items are processed in smaller batches according to
this value. Defaults to 1, which processes all items in a single batch.
Returns:
Tensor: A torch Tensor of the encoded latent representations. The shape of the tensor is adjusted
to match the input shape, scaled by the model's configuration.
Examples:
>>> import torch
>>> from diffusers import AutoencoderKL
>>> vae = AutoencoderKL.from_pretrained('your-model-name')
>>> images = torch.rand(10, 3, 8, 256, 256) # Example tensor with 10 videos of 8 frames.
>>> latents = vae_encode(images, vae)
>>> print(latents.shape) # Output shape will depend on the model's latent configuration.
Note:
In case of a video, the function encodes the media item frame by frame.
"""
is_video_shaped = media_items.dim() == 5
batch_size, channels = media_items.shape[0:2]
if channels != 3:
raise ValueError(f"Expects tensors with 3 channels, got {channels}.")
if is_video_shaped and not isinstance(
vae, (VideoAutoencoder, CausalVideoAutoencoder)
):
media_items = rearrange(media_items, "b c n h w -> (b n) c h w")
if split_size > 1:
if len(media_items) % split_size != 0:
raise ValueError(
"Error: The batch size must be divisible by 'train.vae_bs_split"
)
encode_bs = len(media_items) // split_size
# latents = [vae.encode(image_batch).latent_dist.sample() for image_batch in media_items.split(encode_bs)]
latents = []
if media_items.device.type == "xla":
xm.mark_step()
for image_batch in media_items.split(encode_bs):
latents.append(vae.encode(image_batch).latent_dist.sample())
if media_items.device.type == "xla":
xm.mark_step()
latents = torch.cat(latents, dim=0)
else:
latents = vae.encode(media_items).latent_dist.sample()
latents = normalize_latents(latents, vae, vae_per_channel_normalize)
if is_video_shaped and not isinstance(
vae, (VideoAutoencoder, CausalVideoAutoencoder)
):
latents = rearrange(latents, "(b n) c h w -> b c n h w", b=batch_size)
return latents
def vae_decode(
latents: Tensor,
vae: AutoencoderKL,
is_video: bool = True,
split_size: int = 1,
vae_per_channel_normalize=False,
) -> Tensor:
is_video_shaped = latents.dim() == 5
batch_size = latents.shape[0]
if is_video_shaped and not isinstance(
vae, (VideoAutoencoder, CausalVideoAutoencoder)
):
latents = rearrange(latents, "b c n h w -> (b n) c h w")
if split_size > 1:
if len(latents) % split_size != 0:
raise ValueError(
"Error: The batch size must be divisible by 'train.vae_bs_split"
)
encode_bs = len(latents) // split_size
image_batch = [
_run_decoder(latent_batch, vae, is_video, vae_per_channel_normalize)
for latent_batch in latents.split(encode_bs)
]
images = torch.cat(image_batch, dim=0)
else:
images = _run_decoder(latents, vae, is_video, vae_per_channel_normalize)
if is_video_shaped and not isinstance(
vae, (VideoAutoencoder, CausalVideoAutoencoder)
):
images = rearrange(images, "(b n) c h w -> b c n h w", b=batch_size)
return images
def _run_decoder(
latents: Tensor, vae: AutoencoderKL, is_video: bool, vae_per_channel_normalize=False
) -> Tensor:
if isinstance(vae, (VideoAutoencoder, CausalVideoAutoencoder)):
*_, fl, hl, wl = latents.shape
temporal_scale, spatial_scale, _ = get_vae_size_scale_factor(vae)
latents = latents.to(vae.dtype)
image = vae.decode(
un_normalize_latents(latents, vae, vae_per_channel_normalize),
return_dict=False,
target_shape=(
1,
3,
fl * temporal_scale if is_video else 1,
hl * spatial_scale,
wl * spatial_scale,
),
)[0]
else:
image = vae.decode(
un_normalize_latents(latents, vae, vae_per_channel_normalize),
return_dict=False,
)[0]
return image
def get_vae_size_scale_factor(vae: AutoencoderKL) -> float:
if isinstance(vae, CausalVideoAutoencoder):
spatial = vae.spatial_downscale_factor
temporal = vae.temporal_downscale_factor
else:
down_blocks = len(
[
block
for block in vae.encoder.down_blocks
if isinstance(block.downsample, Downsample3D)
]
)
spatial = vae.config.patch_size * 2**down_blocks
temporal = (
vae.config.patch_size_t * 2**down_blocks
if isinstance(vae, VideoAutoencoder)
else 1
)
return (temporal, spatial, spatial)
def normalize_latents(
latents: Tensor, vae: AutoencoderKL, vae_per_channel_normalize: bool = False
) -> Tensor:
return (
(latents - vae.mean_of_means.to(latents.dtype).view(1, -1, 1, 1, 1))
/ vae.std_of_means.to(latents.dtype).view(1, -1, 1, 1, 1)
if vae_per_channel_normalize
else latents * vae.config.scaling_factor
)
def un_normalize_latents(
latents: Tensor, vae: AutoencoderKL, vae_per_channel_normalize: bool = False
) -> Tensor:
return (
latents * vae.std_of_means.to(latents.dtype).view(1, -1, 1, 1, 1)
+ vae.mean_of_means.to(latents.dtype).view(1, -1, 1, 1, 1)
if vae_per_channel_normalize
else latents / vae.config.scaling_factor
)
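In the default (non per-channel) path, the two helpers above are exact inverses: they only multiply and divide by the VAE's scaling factor. A minimal numeric sketch (the scaling-factor value is hypothetical):

import torch

scaling_factor = 0.13025                   # hypothetical value, for illustration only
latents = torch.randn(1, 4, 2, 8, 8)
normalized = latents * scaling_factor      # normalize_latents(..., vae_per_channel_normalize=False)
restored = normalized / scaling_factor     # un_normalize_latents(..., vae_per_channel_normalize=False)
assert torch.allclose(latents, restored)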

File diff suppressed because it is too large

File diff suppressed because it is too large

View File

@@ -0,0 +1,129 @@
# Adapted from: https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/embeddings.py
import math
import numpy as np
import torch
from einops import rearrange
from torch import nn
def get_timestep_embedding(
timesteps: torch.Tensor,
embedding_dim: int,
flip_sin_to_cos: bool = False,
downscale_freq_shift: float = 1,
scale: float = 1,
max_period: int = 10000,
):
"""
This matches the implementation in Denoising Diffusion Probabilistic Models: Create sinusoidal timestep embeddings.
:param timesteps: a 1-D Tensor of N indices, one per batch element.
These may be fractional.
:param embedding_dim: the dimension of the output.
:param max_period: controls the minimum frequency of the embeddings.
:return: an [N x dim] Tensor of positional embeddings.
"""
assert len(timesteps.shape) == 1, "Timesteps should be a 1d-array"
half_dim = embedding_dim // 2
exponent = -math.log(max_period) * torch.arange(
start=0, end=half_dim, dtype=torch.float32, device=timesteps.device
)
exponent = exponent / (half_dim - downscale_freq_shift)
emb = torch.exp(exponent)
emb = timesteps[:, None].float() * emb[None, :]
# scale embeddings
emb = scale * emb
# concat sine and cosine embeddings
emb = torch.cat([torch.sin(emb), torch.cos(emb)], dim=-1)
# flip sine and cosine embeddings
if flip_sin_to_cos:
emb = torch.cat([emb[:, half_dim:], emb[:, :half_dim]], dim=-1)
# zero pad
if embedding_dim % 2 == 1:
emb = torch.nn.functional.pad(emb, (0, 1, 0, 0))
return emb
def get_3d_sincos_pos_embed(embed_dim, grid, w, h, f):
"""
embed_dim: the output embedding dimension (must be divisible by 3)
grid: a (3, f*h*w) array of flattened (f, h, w) coordinates
w, h, f: the grid width, height and number of frames
return: pos_embed of shape [f*h*w, embed_dim]
"""
grid = rearrange(grid, "c (f h w) -> c f h w", h=h, w=w)
grid = rearrange(grid, "c f h w -> c h w f", h=h, w=w)
grid = grid.reshape([3, 1, w, h, f])
pos_embed = get_3d_sincos_pos_embed_from_grid(embed_dim, grid)
pos_embed = pos_embed.transpose(1, 0, 2, 3)
return rearrange(pos_embed, "h w f c -> (f h w) c")
def get_3d_sincos_pos_embed_from_grid(embed_dim, grid):
if embed_dim % 3 != 0:
raise ValueError("embed_dim must be divisible by 3")
# use half of dimensions to encode grid_h
emb_f = get_1d_sincos_pos_embed_from_grid(embed_dim // 3, grid[0]) # (H*W*T, D/3)
emb_h = get_1d_sincos_pos_embed_from_grid(embed_dim // 3, grid[1]) # (H*W*T, D/3)
emb_w = get_1d_sincos_pos_embed_from_grid(embed_dim // 3, grid[2]) # (H*W*T, D/3)
emb = np.concatenate([emb_h, emb_w, emb_f], axis=-1) # (H*W*T, D)
return emb
def get_1d_sincos_pos_embed_from_grid(embed_dim, pos):
"""
embed_dim: output dimension for each position
pos: a list of positions to be encoded: size (M,)
out: (M, D)
"""
if embed_dim % 2 != 0:
raise ValueError("embed_dim must be divisible by 2")
omega = np.arange(embed_dim // 2, dtype=np.float64)
omega /= embed_dim / 2.0
omega = 1.0 / 10000**omega # (D/2,)
pos_shape = pos.shape
pos = pos.reshape(-1)
out = np.einsum("m,d->md", pos, omega) # (M, D/2), outer product
out = out.reshape([*pos_shape, -1])[0]
emb_sin = np.sin(out) # (M, D/2)
emb_cos = np.cos(out) # (M, D/2)
emb = np.concatenate([emb_sin, emb_cos], axis=-1) # (M, D)
return emb
class SinusoidalPositionalEmbedding(nn.Module):
"""Apply positional information to a sequence of embeddings.
Takes in a sequence of embeddings with shape (batch_size, seq_length, embed_dim) and adds positional embeddings to
them
Args:
embed_dim: (int): Dimension of the positional embedding.
max_seq_length: Maximum sequence length to apply positional embeddings
"""
def __init__(self, embed_dim: int, max_seq_length: int = 32):
super().__init__()
position = torch.arange(max_seq_length).unsqueeze(1)
div_term = torch.exp(
torch.arange(0, embed_dim, 2) * (-math.log(10000.0) / embed_dim)
)
pe = torch.zeros(1, max_seq_length, embed_dim)
pe[0, :, 0::2] = torch.sin(position * div_term)
pe[0, :, 1::2] = torch.cos(position * div_term)
self.register_buffer("pe", pe)
def forward(self, x):
_, seq_length, _ = x.shape
x = x + self.pe[:, :seq_length]
return x
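A minimal usage sketch of get_timestep_embedding defined above:

import torch

t = torch.tensor([0, 250, 999])
emb = get_timestep_embedding(t, embedding_dim=128)
print(emb.shape)  # torch.Size([3, 128]) -- sine half followed by cosine half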

View File

@@ -0,0 +1,96 @@
from abc import ABC, abstractmethod
from typing import Tuple
import torch
from diffusers.configuration_utils import ConfigMixin
from einops import rearrange
from torch import Tensor
from ltx_video.utils.torch_utils import append_dims
class Patchifier(ConfigMixin, ABC):
def __init__(self, patch_size: int):
super().__init__()
self._patch_size = (1, patch_size, patch_size)
@abstractmethod
def patchify(
self, latents: Tensor, frame_rates: Tensor, scale_grid: bool
) -> Tuple[Tensor, Tensor]:
pass
@abstractmethod
def unpatchify(
self,
latents: Tensor,
output_height: int,
output_width: int,
output_num_frames: int,
out_channels: int,
) -> Tuple[Tensor, Tensor]:
pass
@property
def patch_size(self):
return self._patch_size
def get_grid(
self, orig_num_frames, orig_height, orig_width, batch_size, scale_grid, device
):
f = orig_num_frames // self._patch_size[0]
h = orig_height // self._patch_size[1]
w = orig_width // self._patch_size[2]
grid_h = torch.arange(h, dtype=torch.float32, device=device)
grid_w = torch.arange(w, dtype=torch.float32, device=device)
grid_f = torch.arange(f, dtype=torch.float32, device=device)
grid = torch.meshgrid(grid_f, grid_h, grid_w)
grid = torch.stack(grid, dim=0)
grid = grid.unsqueeze(0).repeat(batch_size, 1, 1, 1, 1)
if scale_grid is not None:
for i in range(3):
if isinstance(scale_grid[i], Tensor):
scale = append_dims(scale_grid[i], grid.ndim - 1)
else:
scale = scale_grid[i]
grid[:, i, ...] = grid[:, i, ...] * scale * self._patch_size[i]
grid = rearrange(grid, "b c f h w -> b c (f h w)", b=batch_size)
return grid
class SymmetricPatchifier(Patchifier):
def patchify(
self,
latents: Tensor,
) -> Tuple[Tensor, Tensor]:
latents = rearrange(
latents,
"b c (f p1) (h p2) (w p3) -> b (f h w) (c p1 p2 p3)",
p1=self._patch_size[0],
p2=self._patch_size[1],
p3=self._patch_size[2],
)
return latents
def unpatchify(
self,
latents: Tensor,
output_height: int,
output_width: int,
output_num_frames: int,
out_channels: int,
) -> Tuple[Tensor, Tensor]:
output_height = output_height // self._patch_size[1]
output_width = output_width // self._patch_size[2]
latents = rearrange(
latents,
"b (f h w) (c p q) -> b c f (h p) (w q) ",
f=output_num_frames,
h=output_height,
w=output_width,
p=self._patch_size[1],
q=self._patch_size[2],
)
return latents
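A shape-only sketch of the symmetric patchify/unpatchify round trip, using the same einops patterns with an assumed latent of shape (1, 4, 8, 16, 16) and patch_size=2 (so the internal patch size is (1, 2, 2)):

import torch
from einops import rearrange

latents = torch.randn(1, 4, 8, 16, 16)  # (b, c, f, h, w)
tokens = rearrange(
    latents, "b c (f p1) (h p2) (w p3) -> b (f h w) (c p1 p2 p3)", p1=1, p2=2, p3=2
)
print(tokens.shape)  # torch.Size([1, 512, 16]) -- 8*8*8 tokens of dim 4*1*2*2
restored = rearrange(
    tokens, "b (f h w) (c p q) -> b c f (h p) (w q)", f=8, h=8, w=8, p=2, q=2
)
print(restored.shape)  # torch.Size([1, 4, 8, 16, 16])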

View File

@@ -0,0 +1,491 @@
# Adapted from: https://github.com/huggingface/diffusers/blob/v0.26.3/src/diffusers/models/transformers/transformer_2d.py
import math
from dataclasses import dataclass
from typing import Any, Dict, List, Optional, Literal
import torch
from diffusers.configuration_utils import ConfigMixin, register_to_config
from diffusers.models.embeddings import PixArtAlphaTextProjection
from diffusers.models.modeling_utils import ModelMixin
from diffusers.models.normalization import AdaLayerNormSingle
from diffusers.utils import BaseOutput, is_torch_version
from diffusers.utils import logging
from torch import nn
from ltx_video.models.transformers.attention import BasicTransformerBlock
from ltx_video.models.transformers.embeddings import get_3d_sincos_pos_embed
logger = logging.get_logger(__name__)
@dataclass
class Transformer3DModelOutput(BaseOutput):
"""
The output of [`Transformer3DModel`].
Args:
sample (`torch.FloatTensor` of shape `(batch_size, num_channels, height, width)` or `(batch size, num_vector_embeds - 1, num_latent_pixels)` if [`Transformer2DModel`] is discrete):
The hidden states output conditioned on the `encoder_hidden_states` input. If discrete, returns probability
distributions for the unnoised latent pixels.
"""
sample: torch.FloatTensor
class Transformer3DModel(ModelMixin, ConfigMixin):
_supports_gradient_checkpointing = True
@register_to_config
def __init__(
self,
num_attention_heads: int = 16,
attention_head_dim: int = 88,
in_channels: Optional[int] = None,
out_channels: Optional[int] = None,
num_layers: int = 1,
dropout: float = 0.0,
norm_num_groups: int = 32,
cross_attention_dim: Optional[int] = None,
attention_bias: bool = False,
num_vector_embeds: Optional[int] = None,
activation_fn: str = "geglu",
num_embeds_ada_norm: Optional[int] = None,
use_linear_projection: bool = False,
only_cross_attention: bool = False,
double_self_attention: bool = False,
upcast_attention: bool = False,
adaptive_norm: str = "single_scale_shift", # 'single_scale_shift' or 'single_scale'
standardization_norm: str = "layer_norm", # 'layer_norm' or 'rms_norm'
norm_elementwise_affine: bool = True,
norm_eps: float = 1e-5,
attention_type: str = "default",
caption_channels: int = None,
project_to_2d_pos: bool = False,
use_tpu_flash_attention: bool = False, # if True uses the TPU attention offload ('flash attention')
qk_norm: Optional[str] = None,
positional_embedding_type: str = "absolute",
positional_embedding_theta: Optional[float] = None,
positional_embedding_max_pos: Optional[List[int]] = None,
timestep_scale_multiplier: Optional[float] = None,
):
super().__init__()
self.use_tpu_flash_attention = (
use_tpu_flash_attention # FIXME: push config down to the attention modules
)
self.use_linear_projection = use_linear_projection
self.num_attention_heads = num_attention_heads
self.attention_head_dim = attention_head_dim
inner_dim = num_attention_heads * attention_head_dim
self.inner_dim = inner_dim
self.project_to_2d_pos = project_to_2d_pos
self.patchify_proj = nn.Linear(in_channels, inner_dim, bias=True)
self.positional_embedding_type = positional_embedding_type
self.positional_embedding_theta = positional_embedding_theta
self.positional_embedding_max_pos = positional_embedding_max_pos
self.use_rope = self.positional_embedding_type == "rope"
self.timestep_scale_multiplier = timestep_scale_multiplier
if self.positional_embedding_type == "absolute":
embed_dim_3d = (
math.ceil((inner_dim / 2) * 3) if project_to_2d_pos else inner_dim
)
if self.project_to_2d_pos:
self.to_2d_proj = torch.nn.Linear(embed_dim_3d, inner_dim, bias=False)
self._init_to_2d_proj_weights(self.to_2d_proj)
elif self.positional_embedding_type == "rope":
if positional_embedding_theta is None:
raise ValueError(
"If `positional_embedding_type` type is rope, `positional_embedding_theta` must also be defined"
)
if positional_embedding_max_pos is None:
raise ValueError(
"If `positional_embedding_type` type is rope, `positional_embedding_max_pos` must also be defined"
)
# 3. Define transformers blocks
self.transformer_blocks = nn.ModuleList(
[
BasicTransformerBlock(
inner_dim,
num_attention_heads,
attention_head_dim,
dropout=dropout,
cross_attention_dim=cross_attention_dim,
activation_fn=activation_fn,
num_embeds_ada_norm=num_embeds_ada_norm,
attention_bias=attention_bias,
only_cross_attention=only_cross_attention,
double_self_attention=double_self_attention,
upcast_attention=upcast_attention,
adaptive_norm=adaptive_norm,
standardization_norm=standardization_norm,
norm_elementwise_affine=norm_elementwise_affine,
norm_eps=norm_eps,
attention_type=attention_type,
use_tpu_flash_attention=use_tpu_flash_attention,
qk_norm=qk_norm,
use_rope=self.use_rope,
)
for d in range(num_layers)
]
)
# 4. Define output layers
self.out_channels = in_channels if out_channels is None else out_channels
self.norm_out = nn.LayerNorm(inner_dim, elementwise_affine=False, eps=1e-6)
self.scale_shift_table = nn.Parameter(
torch.randn(2, inner_dim) / inner_dim**0.5
)
self.proj_out = nn.Linear(inner_dim, self.out_channels)
self.adaln_single = AdaLayerNormSingle(
inner_dim, use_additional_conditions=False
)
if adaptive_norm == "single_scale":
self.adaln_single.linear = nn.Linear(inner_dim, 4 * inner_dim, bias=True)
self.caption_projection = None
if caption_channels is not None:
self.caption_projection = PixArtAlphaTextProjection(
in_features=caption_channels, hidden_size=inner_dim
)
self.gradient_checkpointing = False
def set_use_tpu_flash_attention(self):
r"""
Function sets the flag in this object and propagates down the children. The flag will enforce the usage of TPU
attention kernel.
"""
logger.info("ENABLE TPU FLASH ATTENTION -> TRUE")
self.use_tpu_flash_attention = True
# push config down to the attention modules
for block in self.transformer_blocks:
block.set_use_tpu_flash_attention()
def initialize(self, embedding_std: float, mode: Literal["ltx_video", "legacy"]):
def _basic_init(module):
if isinstance(module, nn.Linear):
torch.nn.init.xavier_uniform_(module.weight)
if module.bias is not None:
nn.init.constant_(module.bias, 0)
self.apply(_basic_init)
# Initialize timestep embedding MLP:
nn.init.normal_(
self.adaln_single.emb.timestep_embedder.linear_1.weight, std=embedding_std
)
nn.init.normal_(
self.adaln_single.emb.timestep_embedder.linear_2.weight, std=embedding_std
)
nn.init.normal_(self.adaln_single.linear.weight, std=embedding_std)
if hasattr(self.adaln_single.emb, "resolution_embedder"):
nn.init.normal_(
self.adaln_single.emb.resolution_embedder.linear_1.weight,
std=embedding_std,
)
nn.init.normal_(
self.adaln_single.emb.resolution_embedder.linear_2.weight,
std=embedding_std,
)
if hasattr(self.adaln_single.emb, "aspect_ratio_embedder"):
nn.init.normal_(
self.adaln_single.emb.aspect_ratio_embedder.linear_1.weight,
std=embedding_std,
)
nn.init.normal_(
self.adaln_single.emb.aspect_ratio_embedder.linear_2.weight,
std=embedding_std,
)
# Initialize caption embedding MLP:
nn.init.normal_(self.caption_projection.linear_1.weight, std=embedding_std)
nn.init.normal_(self.caption_projection.linear_2.weight, std=embedding_std)
for block in self.transformer_blocks:
if mode.lower() == "ltx_video":
nn.init.constant_(block.attn1.to_out[0].weight, 0)
nn.init.constant_(block.attn1.to_out[0].bias, 0)
nn.init.constant_(block.attn2.to_out[0].weight, 0)
nn.init.constant_(block.attn2.to_out[0].bias, 0)
if mode.lower() == "ltx_video":
nn.init.constant_(block.ff.net[2].weight, 0)
nn.init.constant_(block.ff.net[2].bias, 0)
# Zero-out output layers:
nn.init.constant_(self.proj_out.weight, 0)
nn.init.constant_(self.proj_out.bias, 0)
def _set_gradient_checkpointing(self, module, value=False):
if hasattr(module, "gradient_checkpointing"):
module.gradient_checkpointing = value
@staticmethod
def _init_to_2d_proj_weights(linear_layer):
input_features = linear_layer.weight.data.size(1)
output_features = linear_layer.weight.data.size(0)
# Start with a zero matrix
identity_like = torch.zeros((output_features, input_features))
# Fill the diagonal with 1's as much as possible
min_features = min(output_features, input_features)
identity_like[:min_features, :min_features] = torch.eye(min_features)
linear_layer.weight.data = identity_like.to(linear_layer.weight.data.device)
def get_fractional_positions(self, indices_grid):
fractional_positions = torch.stack(
[
indices_grid[:, i] / self.positional_embedding_max_pos[i]
for i in range(3)
],
dim=-1,
)
return fractional_positions
def precompute_freqs_cis(self, indices_grid, spacing="exp"):
dtype = torch.float32 # We need full precision in the freqs_cis computation.
dim = self.inner_dim
theta = self.positional_embedding_theta
fractional_positions = self.get_fractional_positions(indices_grid)
start = 1
end = theta
device = fractional_positions.device
if spacing == "exp":
indices = theta ** (
torch.linspace(
math.log(start, theta),
math.log(end, theta),
dim // 6,
device=device,
dtype=dtype,
)
)
indices = indices.to(dtype=dtype)
elif spacing == "exp_2":
indices = 1.0 / theta ** (torch.arange(0, dim, 6, device=device) / dim)
indices = indices.to(dtype=dtype)
elif spacing == "linear":
indices = torch.linspace(start, end, dim // 6, device=device, dtype=dtype)
elif spacing == "sqrt":
indices = torch.linspace(
start**2, end**2, dim // 6, device=device, dtype=dtype
).sqrt()
indices = indices * math.pi / 2
if spacing == "exp_2":
freqs = (
(indices * fractional_positions.unsqueeze(-1))
.transpose(-1, -2)
.flatten(2)
)
else:
freqs = (
(indices * (fractional_positions.unsqueeze(-1) * 2 - 1))
.transpose(-1, -2)
.flatten(2)
)
cos_freq = freqs.cos().repeat_interleave(2, dim=-1)
sin_freq = freqs.sin().repeat_interleave(2, dim=-1)
if dim % 6 != 0:
cos_padding = torch.ones_like(cos_freq[:, :, : dim % 6])
sin_padding = torch.zeros_like(cos_freq[:, :, : dim % 6])
cos_freq = torch.cat([cos_padding, cos_freq], dim=-1)
sin_freq = torch.cat([sin_padding, sin_freq], dim=-1)
return cos_freq.to(self.dtype), sin_freq.to(self.dtype)
def forward(
self,
hidden_states: torch.Tensor,
indices_grid: torch.Tensor,
encoder_hidden_states: Optional[torch.Tensor] = None,
timestep: Optional[torch.LongTensor] = None,
class_labels: Optional[torch.LongTensor] = None,
cross_attention_kwargs: Dict[str, Any] = None,
attention_mask: Optional[torch.Tensor] = None,
encoder_attention_mask: Optional[torch.Tensor] = None,
return_dict: bool = True,
):
"""
The [`Transformer3DModel`] forward method.
Args:
hidden_states (`torch.LongTensor` of shape `(batch size, num latent pixels)` if discrete, `torch.FloatTensor` of shape `(batch size, channel, height, width)` if continuous):
Input `hidden_states`.
indices_grid (`torch.LongTensor` of shape `(batch size, 3, num latent pixels)`):
encoder_hidden_states ( `torch.FloatTensor` of shape `(batch size, sequence len, embed dims)`, *optional*):
Conditional embeddings for cross attention layer. If not given, cross-attention defaults to
self-attention.
timestep ( `torch.LongTensor`, *optional*):
Used to indicate denoising step. Optional timestep to be applied as an embedding in `AdaLayerNorm`.
class_labels ( `torch.LongTensor` of shape `(batch size, num classes)`, *optional*):
Used to indicate class labels conditioning. Optional class labels to be applied as an embedding in
`AdaLayerZeroNorm`.
cross_attention_kwargs ( `Dict[str, Any]`, *optional*):
A kwargs dictionary that if specified is passed along to the `AttentionProcessor` as defined under
`self.processor` in
[diffusers.models.attention_processor](https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/attention_processor.py).
attention_mask ( `torch.Tensor`, *optional*):
An attention mask of shape `(batch, key_tokens)` is applied to `encoder_hidden_states`. If `1` the mask
is kept, otherwise if `0` it is discarded. Mask will be converted into a bias, which adds large
negative values to the attention scores corresponding to "discard" tokens.
encoder_attention_mask ( `torch.Tensor`, *optional*):
Cross-attention mask applied to `encoder_hidden_states`. Two formats supported:
* Mask `(batch, sequence_length)` True = keep, False = discard.
* Bias `(batch, 1, sequence_length)` 0 = keep, -10000 = discard.
If `ndim == 2`: will be interpreted as a mask, then converted into a bias consistent with the format
above. This bias will be added to the cross-attention scores.
return_dict (`bool`, *optional*, defaults to `True`):
Whether or not to return a [`~models.unets.unet_2d_condition.UNet2DConditionOutput`] instead of a plain
tuple.
Returns:
If `return_dict` is True, an [`~models.transformer_2d.Transformer2DModelOutput`] is returned, otherwise a
`tuple` where the first element is the sample tensor.
"""
# for tpu attention offload 2d token masks are used. No need to transform.
if not self.use_tpu_flash_attention:
# ensure attention_mask is a bias, and give it a singleton query_tokens dimension.
# we may have done this conversion already, e.g. if we came here via UNet2DConditionModel#forward.
# we can tell by counting dims; if ndim == 2: it's a mask rather than a bias.
# expects mask of shape:
# [batch, key_tokens]
# adds singleton query_tokens dimension:
# [batch, 1, key_tokens]
# this helps to broadcast it as a bias over attention scores, which will be in one of the following shapes:
# [batch, heads, query_tokens, key_tokens] (e.g. torch sdp attn)
# [batch * heads, query_tokens, key_tokens] (e.g. xformers or classic attn)
if attention_mask is not None and attention_mask.ndim == 2:
# assume that mask is expressed as:
# (1 = keep, 0 = discard)
# convert mask into a bias that can be added to attention scores:
# (keep = +0, discard = -10000.0)
attention_mask = (1 - attention_mask.to(hidden_states.dtype)) * -10000.0
attention_mask = attention_mask.unsqueeze(1)
# convert encoder_attention_mask to a bias the same way we do for attention_mask
if encoder_attention_mask is not None and encoder_attention_mask.ndim == 2:
encoder_attention_mask = (
1 - encoder_attention_mask.to(hidden_states.dtype)
) * -10000.0
encoder_attention_mask = encoder_attention_mask.unsqueeze(1)
# 1. Input
hidden_states = self.patchify_proj(hidden_states)
if self.timestep_scale_multiplier:
timestep = self.timestep_scale_multiplier * timestep
if self.positional_embedding_type == "absolute":
pos_embed_3d = self.get_absolute_pos_embed(indices_grid).to(
hidden_states.device
)
if self.project_to_2d_pos:
pos_embed = self.to_2d_proj(pos_embed_3d)
hidden_states = (hidden_states + pos_embed).to(hidden_states.dtype)
freqs_cis = None
elif self.positional_embedding_type == "rope":
freqs_cis = self.precompute_freqs_cis(indices_grid)
batch_size = hidden_states.shape[0]
timestep, embedded_timestep = self.adaln_single(
timestep.flatten(),
{"resolution": None, "aspect_ratio": None},
batch_size=batch_size,
hidden_dtype=hidden_states.dtype,
)
# Second dimension is 1 or number of tokens (if timestep_per_token)
timestep = timestep.view(batch_size, -1, timestep.shape[-1])
embedded_timestep = embedded_timestep.view(
batch_size, -1, embedded_timestep.shape[-1]
)
# 2. Blocks
if self.caption_projection is not None:
batch_size = hidden_states.shape[0]
encoder_hidden_states = self.caption_projection(encoder_hidden_states)
encoder_hidden_states = encoder_hidden_states.view(
batch_size, -1, hidden_states.shape[-1]
)
for block in self.transformer_blocks:
if self.training and self.gradient_checkpointing:
def create_custom_forward(module, return_dict=None):
def custom_forward(*inputs):
if return_dict is not None:
return module(*inputs, return_dict=return_dict)
else:
return module(*inputs)
return custom_forward
ckpt_kwargs: Dict[str, Any] = (
{"use_reentrant": False} if is_torch_version(">=", "1.11.0") else {}
)
hidden_states = torch.utils.checkpoint.checkpoint(
create_custom_forward(block),
hidden_states,
freqs_cis,
attention_mask,
encoder_hidden_states,
encoder_attention_mask,
timestep,
cross_attention_kwargs,
class_labels,
**ckpt_kwargs,
)
else:
hidden_states = block(
hidden_states,
freqs_cis=freqs_cis,
attention_mask=attention_mask,
encoder_hidden_states=encoder_hidden_states,
encoder_attention_mask=encoder_attention_mask,
timestep=timestep,
cross_attention_kwargs=cross_attention_kwargs,
class_labels=class_labels,
)
# 3. Output
scale_shift_values = (
self.scale_shift_table[None, None] + embedded_timestep[:, :, None]
)
shift, scale = scale_shift_values[:, :, 0], scale_shift_values[:, :, 1]
hidden_states = self.norm_out(hidden_states)
# Modulation
hidden_states = hidden_states * (1 + scale) + shift
hidden_states = self.proj_out(hidden_states)
if not return_dict:
return (hidden_states,)
return Transformer3DModelOutput(sample=hidden_states)
def get_absolute_pos_embed(self, grid):
grid_np = grid[0].cpu().numpy()
embed_dim_3d = (
math.ceil((self.inner_dim / 2) * 3)
if self.project_to_2d_pos
else self.inner_dim
)
pos_embed = get_3d_sincos_pos_embed( # (f h w)
embed_dim_3d,
grid_np,
h=int(max(grid_np[1]) + 1),
w=int(max(grid_np[2]) + 1),
f=int(max(grid_np[0] + 1)),
)
return torch.from_numpy(pos_embed).float().unsqueeze(0)
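When TPU flash attention is disabled, forward() converts 2D keep/discard masks into additive biases before the transformer blocks. A standalone sketch of that conversion:

import torch

mask = torch.tensor([[1, 1, 0]])                # (batch, key_tokens): 1 = keep, 0 = discard
bias = (1 - mask.to(torch.float32)) * -10000.0  # keep -> 0.0, discard -> -10000.0
bias = bias.unsqueeze(1)                        # (batch, 1, key_tokens), broadcast over queries
print(bias.shape, bias.tolist())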

View File

File diff suppressed because it is too large

View File

296
ltx_video/schedulers/rf.py Normal file
View File

@@ -0,0 +1,296 @@
import math
from abc import ABC, abstractmethod
from dataclasses import dataclass
from typing import Optional, Tuple, Union
import torch
from diffusers.configuration_utils import ConfigMixin, register_to_config
from diffusers.schedulers.scheduling_utils import SchedulerMixin
from diffusers.utils import BaseOutput
from torch import Tensor
from ltx_video.utils.torch_utils import append_dims
def simple_diffusion_resolution_dependent_timestep_shift(
samples: Tensor,
timesteps: Tensor,
n: int = 32 * 32,
) -> Tensor:
if len(samples.shape) == 3:
_, m, _ = samples.shape
elif len(samples.shape) in [4, 5]:
m = math.prod(samples.shape[2:])
else:
raise ValueError(
"Samples must have shape (b, t, c), (b, c, h, w) or (b, c, f, h, w)"
)
snr = (timesteps / (1 - timesteps)) ** 2
shift_snr = torch.log(snr) + 2 * math.log(m / n)
shifted_timesteps = torch.sigmoid(0.5 * shift_snr)
return shifted_timesteps
def time_shift(mu: float, sigma: float, t: Tensor):
return math.exp(mu) / (math.exp(mu) + (1 / t - 1) ** sigma)
def get_normal_shift(
n_tokens: int,
min_tokens: int = 1024,
max_tokens: int = 4096,
min_shift: float = 0.95,
max_shift: float = 2.05,
) -> float:
m = (max_shift - min_shift) / (max_tokens - min_tokens)
b = min_shift - m * min_tokens
return m * n_tokens + b
def strech_shifts_to_terminal(shifts: Tensor, terminal=0.1):
"""
Stretch a function (given as sampled shifts) so that its final value matches the given terminal value
using the provided formula.
Parameters:
- shifts (Tensor): The samples of the function to be stretched (PyTorch Tensor).
- terminal (float): The desired terminal value (value at the last sample).
Returns:
- Tensor: The stretched shifts such that the final value equals `terminal`.
"""
if shifts.numel() == 0:
raise ValueError("The 'shifts' tensor must not be empty.")
# Ensure terminal value is valid
if terminal <= 0 or terminal >= 1:
raise ValueError("The terminal value must be between 0 and 1 (exclusive).")
# Transform the shifts using the given formula
one_minus_z = 1 - shifts
scale_factor = one_minus_z[-1] / (1 - terminal)
stretched_shifts = 1 - (one_minus_z / scale_factor)
return stretched_shifts
def sd3_resolution_dependent_timestep_shift(
samples: Tensor, timesteps: Tensor, target_shift_terminal: Optional[float] = None
) -> Tensor:
"""
Shifts the timestep schedule as a function of the generated resolution.
In the SD3 paper, the authors empirically determine how to shift the timesteps based on the resolution of the target images.
For more details: https://arxiv.org/pdf/2403.03206
In Flux they later propose a more dynamic resolution dependent timestep shift, see:
https://github.com/black-forest-labs/flux/blob/87f6fff727a377ea1c378af692afb41ae84cbe04/src/flux/sampling.py#L66
Args:
samples (Tensor): A batch of samples with shape (batch_size, channels, height, width) or
(batch_size, channels, frame, height, width).
timesteps (Tensor): A batch of timesteps with shape (batch_size,).
target_shift_terminal (float): The target terminal value for the shifted timesteps.
Returns:
Tensor: The shifted timesteps.
"""
if len(samples.shape) == 3:
_, m, _ = samples.shape
elif len(samples.shape) in [4, 5]:
m = math.prod(samples.shape[2:])
else:
raise ValueError(
"Samples must have shape (b, t, c), (b, c, h, w) or (b, c, f, h, w)"
)
shift = get_normal_shift(m)
time_shifts = time_shift(shift, 1, timesteps)
if target_shift_terminal is not None: # Stretch the shifts to the target terminal
time_shifts = strech_shifts_to_terminal(time_shifts, target_shift_terminal)
return time_shifts
class TimestepShifter(ABC):
@abstractmethod
def shift_timesteps(self, samples: Tensor, timesteps: Tensor) -> Tensor:
pass
@dataclass
class RectifiedFlowSchedulerOutput(BaseOutput):
"""
Output class for the scheduler's step function output.
Args:
prev_sample (`torch.FloatTensor` of shape `(batch_size, num_channels, height, width)` for images):
Computed sample (x_{t-1}) of previous timestep. `prev_sample` should be used as next model input in the
denoising loop.
pred_original_sample (`torch.FloatTensor` of shape `(batch_size, num_channels, height, width)` for images):
The predicted denoised sample (x_{0}) based on the model output from the current timestep.
`pred_original_sample` can be used to preview progress or for guidance.
"""
prev_sample: torch.FloatTensor
pred_original_sample: Optional[torch.FloatTensor] = None
class RectifiedFlowScheduler(SchedulerMixin, ConfigMixin, TimestepShifter):
order = 1
@register_to_config
def __init__(
self,
num_train_timesteps=1000,
shifting: Optional[str] = None,
base_resolution: int = 32**2,
target_shift_terminal: Optional[float] = None,
):
super().__init__()
self.init_noise_sigma = 1.0
self.num_inference_steps = None
self.timesteps = self.sigmas = torch.linspace(
1, 1 / num_train_timesteps, num_train_timesteps
)
self.delta_timesteps = self.timesteps - torch.cat(
[self.timesteps[1:], torch.zeros_like(self.timesteps[-1:])]
)
self.shifting = shifting
self.base_resolution = base_resolution
self.target_shift_terminal = target_shift_terminal
def shift_timesteps(self, samples: Tensor, timesteps: Tensor) -> Tensor:
if self.shifting == "SD3":
return sd3_resolution_dependent_timestep_shift(
samples, timesteps, self.target_shift_terminal
)
elif self.shifting == "SimpleDiffusion":
return simple_diffusion_resolution_dependent_timestep_shift(
samples, timesteps, self.base_resolution
)
return timesteps
def set_timesteps(
self,
num_inference_steps: int,
samples: Tensor,
device: Union[str, torch.device] = None,
):
"""
Sets the discrete timesteps used for the diffusion chain. Supporting function to be run before inference.
Args:
num_inference_steps (`int`): The number of diffusion steps used when generating samples.
samples (`Tensor`): A batch of samples with shape.
device (`Union[str, torch.device]`, *optional*): The device to which the timesteps tensor will be moved.
"""
num_inference_steps = min(self.config.num_train_timesteps, num_inference_steps)
timesteps = torch.linspace(1, 1 / num_inference_steps, num_inference_steps).to(
device
)
self.timesteps = self.shift_timesteps(samples, timesteps)
self.delta_timesteps = self.timesteps - torch.cat(
[self.timesteps[1:], torch.zeros_like(self.timesteps[-1:])]
)
self.num_inference_steps = num_inference_steps
self.sigmas = self.timesteps
def scale_model_input(
self, sample: torch.FloatTensor, timestep: Optional[int] = None
) -> torch.FloatTensor:
# pylint: disable=unused-argument
"""
Ensures interchangeability with schedulers that need to scale the denoising model input depending on the
current timestep.
Args:
sample (`torch.FloatTensor`): input sample
timestep (`int`, optional): current timestep
Returns:
`torch.FloatTensor`: scaled input sample
"""
return sample
def step(
self,
model_output: torch.FloatTensor,
timestep: torch.FloatTensor,
sample: torch.FloatTensor,
eta: float = 0.0,
use_clipped_model_output: bool = False,
generator=None,
variance_noise: Optional[torch.FloatTensor] = None,
return_dict: bool = True,
) -> Union[RectifiedFlowSchedulerOutput, Tuple]:
# pylint: disable=unused-argument
"""
Predict the sample from the previous timestep by reversing the SDE. This function propagates the diffusion
process from the learned model outputs (most often the predicted noise).
Args:
model_output (`torch.FloatTensor`):
The direct output from learned diffusion model.
timestep (`float`):
The current discrete timestep in the diffusion chain.
sample (`torch.FloatTensor`):
A current instance of a sample created by the diffusion process.
eta (`float`):
The weight of noise for added noise in diffusion step.
use_clipped_model_output (`bool`, defaults to `False`):
If `True`, computes "corrected" `model_output` from the clipped predicted original sample. Necessary
because predicted original sample is clipped to [-1, 1] when `self.config.clip_sample` is `True`. If no
clipping has happened, "corrected" `model_output` would coincide with the one provided as input and
`use_clipped_model_output` has no effect.
generator (`torch.Generator`, *optional*):
A random number generator.
variance_noise (`torch.FloatTensor`):
Alternative to generating noise with `generator` by directly providing the noise for the variance
itself. Useful for methods such as [`CycleDiffusion`].
return_dict (`bool`, *optional*, defaults to `True`):
Whether or not to return a [`~schedulers.scheduling_ddim.DDIMSchedulerOutput`] or `tuple`.
Returns:
[`~schedulers.scheduling_utils.RectifiedFlowSchedulerOutput`] or `tuple`:
If return_dict is `True`, [`~schedulers.rf_scheduler.RectifiedFlowSchedulerOutput`] is returned,
otherwise a tuple is returned where the first element is the sample tensor.
"""
if self.num_inference_steps is None:
raise ValueError(
"Number of inference steps is 'None', you need to run 'set_timesteps' after creating the scheduler"
)
if timestep.ndim == 0:
# Global timestep
current_index = (self.timesteps - timestep).abs().argmin()
dt = self.delta_timesteps.gather(0, current_index.unsqueeze(0))
else:
# Timestep per token
assert timestep.ndim == 2
current_index = (
(self.timesteps[:, None, None] - timestep[None]).abs().argmin(dim=0)
)
dt = self.delta_timesteps[current_index]
# Special treatment for zero timestep tokens - set dt to 0 so prev_sample = sample
dt = torch.where(timestep == 0.0, torch.zeros_like(dt), dt)[..., None]
prev_sample = sample - dt * model_output
if not return_dict:
return (prev_sample,)
return RectifiedFlowSchedulerOutput(prev_sample=prev_sample)
def add_noise(
self,
original_samples: torch.FloatTensor,
noise: torch.FloatTensor,
timesteps: torch.FloatTensor,
) -> torch.FloatTensor:
sigmas = timesteps
sigmas = append_dims(sigmas, original_samples.ndim)
alphas = 1 - sigmas
noisy_samples = alphas * original_samples + sigmas * noise
return noisy_samples
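A small numeric sketch of the two scheduler relations above: add_noise mixes signal and noise linearly (the timestep acts directly as sigma), and step() applies the explicit Euler update prev_sample = sample - dt * model_output. The velocity-style model output below is an assumption, used only to make the numbers round-trip:

import torch

x0 = torch.tensor([2.0])                  # clean sample
noise = torch.tensor([-1.0])
sigma = torch.tensor([0.25])
noisy = (1 - sigma) * x0 + sigma * noise  # add_noise: alphas * x0 + sigmas * noise -> 1.25
dt = torch.tensor([0.25])
model_output = noise - x0                 # assumed velocity target, for illustration only
prev = noisy - dt * model_output          # step(): prev_sample = sample - dt * model_output -> 2.0
print(noisy.item(), prev.item())          # 1.25 2.0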

View File

@@ -0,0 +1,6 @@
from enum import Enum
class ConditioningMethod(Enum):
UNCONDITIONAL = "unconditional"
FIRST_FRAME = "first_frame"

View File

@@ -0,0 +1,25 @@
import torch
from torch import nn
def append_dims(x: torch.Tensor, target_dims: int) -> torch.Tensor:
"""Appends dimensions to the end of a tensor until it has target_dims dimensions."""
dims_to_append = target_dims - x.ndim
if dims_to_append < 0:
raise ValueError(
f"input has {x.ndim} dims but target_dims is {target_dims}, which is less"
)
elif dims_to_append == 0:
return x
return x[(...,) + (None,) * dims_to_append]
class Identity(nn.Module):
"""A placeholder identity operator that is argument-insensitive."""
def __init__(self, *args, **kwargs) -> None: # pylint: disable=unused-argument
super().__init__()
# pylint: disable=unused-argument
def forward(self, x: torch.Tensor, *args, **kwargs) -> torch.Tensor:
return x
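A minimal usage sketch of append_dims, e.g. for broadcasting a per-sample sigma against a batch of feature maps:

import torch

sigma = torch.tensor([0.1, 0.9])      # (batch,)
x = torch.randn(2, 4, 8, 8)
sigma_b = append_dims(sigma, x.ndim)  # shape (2, 1, 1, 1), broadcastable against x
print((x * sigma_b).shape)            # torch.Size([2, 4, 8, 8])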

34
pyproject.toml Normal file
View File

@@ -0,0 +1,34 @@
[build-system]
requires = ["setuptools>=42", "wheel"]
build-backend = "setuptools.build_meta"
[project]
name = "ltx-video"
version = "0.1.0"
description = "A package for LTX-Video model"
authors = [
{ name = "Sapir Weissbuch", email = "sapir@lightricks.com" }
]
requires-python = ">=3.10"
readme = "README.md"
classifiers = [
"Programming Language :: Python :: 3",
"License :: OSI Approved :: MIT License",
"Operating System :: OS Independent"
]
dependencies = [
"torch>=2.1.0",
"diffusers~=0.28.2",
"transformers~=4.44.2",
"sentencepiece~=0.1.96",
"huggingface-hub~=0.25.2",
"einops"
]
[project.optional-dependencies]
# Instead of thinking of them as optional, think of them as specific modes
inference-script = [
"accelerate",
"matplotlib",
"imageio[ffmpeg]"
]