Module stac_fastapi.extensions.third_party.bulk_transactions¶
Bulk transactions extension.
Classes¶
AsyncBaseBulkTransactionsClient¶
class AsyncBaseBulkTransactionsClient(
)
Async base class for bulk transactions clients.
Ancestors (in MRO)¶
- abc.ABC
Methods¶
bulk_item_insert¶
def bulk_item_insert(
self,
items: stac_fastapi.extensions.third_party.bulk_transactions.Items,
**kwargs
) -> str
Bulk creation of items.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
items | Items | List of items to insert. | required |
Returns:
Type | Description |
---|---|
str | Message indicating the status of the insert. |
BaseBulkTransactionsClient¶
class BaseBulkTransactionsClient(
)
Base class for bulk transactions clients.
Ancestors (in MRO)¶
- abc.ABC
Methods¶
bulk_item_insert¶
def bulk_item_insert(
self,
items: stac_fastapi.extensions.third_party.bulk_transactions.Items,
chunk_size: Optional[int] = None,
**kwargs
) -> str
Bulk creation of items.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
items | Items | List of items to insert. | required |
chunk_size | Optional[int] | Number of items processed at a time. | None |
Returns:
Type | Description |
---|---|
str | Message indicating the status of the insert. |
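A backend implements bulk insertion by subclassing one of these base classes and overriding bulk_item_insert. The sketch below uses a hypothetical in-memory stand-in for BaseBulkTransactionsClient (mirroring the documented interface, with the Items model simplified to a plain id-to-item dict) so it is self-contained; a real application would subclass the class imported from stac_fastapi.extensions.third_party.bulk_transactions.

```python
from abc import ABC, abstractmethod
from typing import Any, Dict, Optional

# Stand-in mirroring the documented abstract interface of
# BaseBulkTransactionsClient (items simplified to a dict of id -> item).
class BaseBulkTransactionsClient(ABC):
    @abstractmethod
    def bulk_item_insert(
        self,
        items: Dict[str, Any],
        chunk_size: Optional[int] = None,
        **kwargs: Any,
    ) -> str:
        """Bulk creation of items."""
        ...

class InMemoryBulkClient(BaseBulkTransactionsClient):
    """Hypothetical backend that stores items in a dict, chunk by chunk."""

    def __init__(self) -> None:
        self.store: Dict[str, Any] = {}

    def bulk_item_insert(
        self,
        items: Dict[str, Any],
        chunk_size: Optional[int] = None,
        **kwargs: Any,
    ) -> str:
        ids = list(items)
        size = chunk_size or len(ids) or 1
        # Process the incoming mapping in chunks of `size` items.
        for start in range(0, len(ids), size):
            for item_id in ids[start : start + size]:
                self.store[item_id] = items[item_id]
        return f"Successfully added {len(ids)} items."
```

The chunk_size parameter lets large payloads be committed in batches rather than in one transaction; what "chunk" means (database batch, network request, etc.) is up to the backend.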
BulkTransactionExtension¶
class BulkTransactionExtension(
client: Union[stac_fastapi.extensions.third_party.bulk_transactions.AsyncBaseBulkTransactionsClient, stac_fastapi.extensions.third_party.bulk_transactions.BaseBulkTransactionsClient],
conformance_classes: List[str] = [],
schema_href: Optional[str] = None
)
Bulk Transaction Extension.
The Bulk Transaction extension adds the POST /collections/{collection_id}/bulk_items endpoint to the application for efficient bulk insertion of items. The request body is an object with an "items" attribute, whose value is an object mapping each Item's id to the Item entity itself.
Optionally, clients can specify a "method" attribute of either "insert" or "upsert". With "insert", items are created if they do not already exist, and an error is returned if they do. With "upsert", items are created if they do not exist and updated if they do. The default is "insert".
{
"items": {
"id1": { "type": "Feature", ... },
"id2": { "type": "Feature", ... },
"id3": { "type": "Feature", ... }
},
"method": "insert"
}
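The request body above can be assembled and serialized with the standard library alone. A minimal sketch, where build_bulk_payload is a hypothetical helper (not part of this module):

```python
import json

def build_bulk_payload(items_by_id, method="insert"):
    """Assemble the body documented for POST /collections/{collection_id}/bulk_items."""
    if method not in ("insert", "upsert"):
        raise ValueError("method must be 'insert' or 'upsert'")
    return {"items": dict(items_by_id), "method": method}

payload = build_bulk_payload({
    "id1": {"type": "Feature"},
    "id2": {"type": "Feature"},
})
body = json.dumps(payload)  # JSON request body for the bulk_items endpoint
```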
Ancestors (in MRO)¶
- stac_fastapi.types.extension.ApiExtension
- abc.ABC
Class variables¶
GET
POST
Methods¶
get_request_model¶
def get_request_model(
self,
verb: Optional[str] = 'GET'
) -> Optional[pydantic.main.BaseModel]
Return the request model for the extension method.
The model can differ based on the HTTP verb.
register¶
def register(
self,
app: fastapi.applications.FastAPI
) -> None
Register the extension with a FastAPI application.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
app | FastAPI | Target FastAPI application. | required |
Returns:
Type | Description |
---|---|
None | None |
BulkTransactionMethod¶
class BulkTransactionMethod(
*args,
**kwds
)
Bulk Transaction Methods.
Ancestors (in MRO)¶
- builtins.str
- enum.Enum
Class variables¶
INSERT
UPSERT
name
value
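Because BulkTransactionMethod subclasses str, its members compare equal to their plain string values, which is what allows "method": "insert" in the JSON body to round-trip cleanly. The sketch below is a self-contained stand-in with the same shape (member values assumed from the example payload above):

```python
from enum import Enum

# Stand-in mirroring BulkTransactionMethod: a str-valued Enum, so members
# compare equal to their plain string values.
class BulkTransactionMethod(str, Enum):
    INSERT = "insert"
    UPSERT = "upsert"

method = BulkTransactionMethod("upsert")  # look up a member by its value
```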
Static methods¶
maketrans¶
def maketrans(
...
)
Return a translation table usable for str.translate().
If there is only one argument, it must be a dictionary mapping Unicode ordinals (integers) or characters to Unicode ordinals, strings or None. Character keys will be then converted to ordinals. If there are two arguments, they must be strings of equal length, and in the resulting dictionary, each character in x will be mapped to the character at the same position in y. If there is a third argument, it must be a string, whose characters will be mapped to None in the result.
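Both forms described above can be seen in a short example:

```python
# One-argument form: a dict mapping characters to replacements;
# a value of None deletes the character.
table = str.maketrans({"a": "4", "e": None})
result = "grapes".translate(table)  # 'gr4ps'

# Two-argument form: positional character-to-character mapping.
swap = str.maketrans("abc", "xyz")
swapped = "cab".translate(swap)  # 'zxy'
```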
Methods¶
capitalize¶
def capitalize(
self,
/
)
Return a capitalized version of the string.
More specifically, make the first character have upper case and the rest lower case.
casefold¶
def casefold(
self,
/
)
Return a version of the string suitable for caseless comparisons.
center¶
def center(
self,
width,
fillchar=' ',
/
)
Return a centered string of length width.
Padding is done using the specified fill character (default is a space).
count¶
def count(
...
)
S.count(sub[, start[, end]]) -> int
Return the number of non-overlapping occurrences of substring sub in string S[start:end]. Optional arguments start and end are interpreted as in slice notation.
encode¶
def encode(
self,
/,
encoding='utf-8',
errors='strict'
)
Encode the string using the codec registered for encoding.
encoding: The encoding in which to encode the string.
errors: The error handling scheme to use for encoding errors. The default is 'strict', meaning that encoding errors raise a UnicodeEncodeError. Other possible values are 'ignore', 'replace' and 'xmlcharrefreplace', as well as any other name registered with codecs.register_error that can handle UnicodeEncodeErrors.
endswith¶
def endswith(
...
)
S.endswith(suffix[, start[, end]]) -> bool
Return True if S ends with the specified suffix, False otherwise. With optional start, test S beginning at that position. With optional end, stop comparing S at that position. suffix can also be a tuple of strings to try.
expandtabs¶
def expandtabs(
self,
/,
tabsize=8
)
Return a copy where all tab characters are expanded using spaces.
If tabsize is not given, a tab size of 8 characters is assumed.
find¶
def find(
...
)
S.find(sub[, start[, end]]) -> int
Return the lowest index in S where substring sub is found, such that sub is contained within S[start:end]. Optional arguments start and end are interpreted as in slice notation.
Return -1 on failure.
format¶
def format(
...
)
S.format(*args, **kwargs) -> str
Return a formatted version of S, using substitutions from args and kwargs. The substitutions are identified by braces ('{' and '}').
format_map¶
def format_map(
...
)
S.format_map(mapping) -> str
Return a formatted version of S, using substitutions from mapping. The substitutions are identified by braces ('{' and '}').
index¶
def index(
...
)
S.index(sub[, start[, end]]) -> int
Return the lowest index in S where substring sub is found, such that sub is contained within S[start:end]. Optional arguments start and end are interpreted as in slice notation.
Raises ValueError when the substring is not found.
isalnum¶
def isalnum(
self,
/
)
Return True if the string is an alpha-numeric string, False otherwise.
A string is alpha-numeric if all characters in the string are alpha-numeric and there is at least one character in the string.
isalpha¶
def isalpha(
self,
/
)
Return True if the string is an alphabetic string, False otherwise.
A string is alphabetic if all characters in the string are alphabetic and there is at least one character in the string.
isascii¶
def isascii(
self,
/
)
Return True if all characters in the string are ASCII, False otherwise.
ASCII characters have code points in the range U+0000-U+007F. Empty string is ASCII too.
isdecimal¶
def isdecimal(
self,
/
)
Return True if the string is a decimal string, False otherwise.
A string is a decimal string if all characters in the string are decimal and there is at least one character in the string.
isdigit¶
def isdigit(
self,
/
)
Return True if the string is a digit string, False otherwise.
A string is a digit string if all characters in the string are digits and there is at least one character in the string.
isidentifier¶
def isidentifier(
self,
/
)
Return True if the string is a valid Python identifier, False otherwise.
Call keyword.iskeyword(s) to test whether string s is a reserved identifier, such as "def" or "class".
islower¶
def islower(
self,
/
)
Return True if the string is a lowercase string, False otherwise.
A string is lowercase if all cased characters in the string are lowercase and there is at least one cased character in the string.
isnumeric¶
def isnumeric(
self,
/
)
Return True if the string is a numeric string, False otherwise.
A string is numeric if all characters in the string are numeric and there is at least one character in the string.
isprintable¶
def isprintable(
self,
/
)
Return True if the string is printable, False otherwise.
A string is printable if all of its characters are considered printable in repr() or if it is empty.
isspace¶
def isspace(
self,
/
)
Return True if the string is a whitespace string, False otherwise.
A string is whitespace if all characters in the string are whitespace and there is at least one character in the string.
istitle¶
def istitle(
self,
/
)
Return True if the string is a title-cased string, False otherwise.
In a title-cased string, upper- and title-case characters may only follow uncased characters and lowercase characters only cased ones.
isupper¶
def isupper(
self,
/
)
Return True if the string is an uppercase string, False otherwise.
A string is uppercase if all cased characters in the string are uppercase and there is at least one cased character in the string.
join¶
def join(
self,
iterable,
/
)
Concatenate any number of strings.
The string whose method is called is inserted in between each given string. The result is returned as a new string.
Example: '.'.join(['ab', 'pq', 'rs']) -> 'ab.pq.rs'
ljust¶
def ljust(
self,
width,
fillchar=' ',
/
)
Return a left-justified string of length width.
Padding is done using the specified fill character (default is a space).
lower¶
def lower(
self,
/
)
Return a copy of the string converted to lowercase.
lstrip¶
def lstrip(
self,
chars=None,
/
)
Return a copy of the string with leading whitespace removed.
If chars is given and not None, remove characters in chars instead.
partition¶
def partition(
self,
sep,
/
)
Partition the string into three parts using the given separator.
This will search for the separator in the string. If the separator is found, returns a 3-tuple containing the part before the separator, the separator itself, and the part after it.
If the separator is not found, returns a 3-tuple containing the original string and two empty strings.
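A quick illustration of both outcomes:

```python
# Separator present: (before, separator, after).
host, sep, port = "localhost:8080".partition(":")
# host == 'localhost', sep == ':', port == '8080'

# Separator absent: the original string plus two empty strings.
before, sep2, after = "localhost".partition(":")
# before == 'localhost', sep2 == '' and after == ''
```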
removeprefix¶
def removeprefix(
self,
prefix,
/
)
Return a str with the given prefix string removed if present.
If the string starts with the prefix string, return string[len(prefix):]. Otherwise, return a copy of the original string.
removesuffix¶
def removesuffix(
self,
suffix,
/
)
Return a str with the given suffix string removed if present.
If the string ends with the suffix string and that suffix is not empty, return string[:-len(suffix)]. Otherwise, return a copy of the original string.
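removeprefix and removesuffix only strip when the affix is actually present, which makes them safer than slicing by length:

```python
name = "stac_item.json"
stem = name.removesuffix(".json")   # 'stac_item'
bare = stem.removeprefix("stac_")   # 'item'
same = name.removeprefix("geo_")    # prefix absent: string is unchanged
```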
replace¶
def replace(
self,
old,
new,
count=-1,
/
)
Return a copy with all occurrences of substring old replaced by new.
count Maximum number of occurrences to replace. -1 (the default value) means replace all occurrences.
If the optional argument count is given, only the first count occurrences are replaced.
rfind¶
def rfind(
...
)
S.rfind(sub[, start[, end]]) -> int
Return the highest index in S where substring sub is found, such that sub is contained within S[start:end]. Optional arguments start and end are interpreted as in slice notation.
Return -1 on failure.
rindex¶
def rindex(
...
)
S.rindex(sub[, start[, end]]) -> int
Return the highest index in S where substring sub is found, such that sub is contained within S[start:end]. Optional arguments start and end are interpreted as in slice notation.
Raises ValueError when the substring is not found.
rjust¶
def rjust(
self,
width,
fillchar=' ',
/
)
Return a right-justified string of length width.
Padding is done using the specified fill character (default is a space).
rpartition¶
def rpartition(
self,
sep,
/
)
Partition the string into three parts using the given separator.
This will search for the separator in the string, starting at the end. If the separator is found, returns a 3-tuple containing the part before the separator, the separator itself, and the part after it.
If the separator is not found, returns a 3-tuple containing two empty strings and the original string.
rsplit¶
def rsplit(
self,
/,
sep=None,
maxsplit=-1
)
Return a list of the substrings in the string, using sep as the separator string.
sep The separator used to split the string.
When set to None (the default value), will split on any whitespace
character (including \n \r \t \f and spaces) and will discard
empty strings from the result.
maxsplit Maximum number of splits. -1 (the default value) means no limit.
Splitting starts at the end of the string and works to the front.
rstrip¶
def rstrip(
self,
chars=None,
/
)
Return a copy of the string with trailing whitespace removed.
If chars is given and not None, remove characters in chars instead.
split¶
def split(
self,
/,
sep=None,
maxsplit=-1
)
Return a list of the substrings in the string, using sep as the separator string.
sep The separator used to split the string.
When set to None (the default value), will split on any whitespace
character (including \n \r \t \f and spaces) and will discard
empty strings from the result.
maxsplit Maximum number of splits. -1 (the default value) means no limit.
Splitting starts at the front of the string and works to the end.
Note, str.split() is mainly useful for data that has been intentionally delimited. With natural text that includes punctuation, consider using the regular expression module.
splitlines¶
def splitlines(
self,
/,
keepends=False
)
Return a list of the lines in the string, breaking at line boundaries.
Line breaks are not included in the resulting list unless keepends is given and true.
startswith¶
def startswith(
...
)
S.startswith(prefix[, start[, end]]) -> bool
Return True if S starts with the specified prefix, False otherwise. With optional start, test S beginning at that position. With optional end, stop comparing S at that position. prefix can also be a tuple of strings to try.
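The tuple form of startswith and endswith is handy for checking several alternatives in one call:

```python
url = "https://example.com/item.geojson"
is_http = url.startswith(("http://", "https://"))  # True: matches second prefix
is_geo = url.endswith((".json", ".geojson"))       # True: matches second suffix
from_8 = url.startswith("example", 8)              # True: test begins at index 8
```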
strip¶
def strip(
self,
chars=None,
/
)
Return a copy of the string with leading and trailing whitespace removed.
If chars is given and not None, remove characters in chars instead.
swapcase¶
def swapcase(
self,
/
)
Convert uppercase characters to lowercase and lowercase characters to uppercase.
title¶
def title(
self,
/
)
Return a version of the string where each word is titlecased.
More specifically, words start with uppercased characters and all remaining cased characters have lower case.
translate¶
def translate(
self,
table,
/
)
Replace each character in the string using the given translation table.
table Translation table, which must be a mapping of Unicode ordinals to Unicode ordinals, strings, or None.
The table must implement lookup/indexing via __getitem__, for instance a dictionary or list. If this operation raises LookupError, the character is left untouched. Characters mapped to None are deleted.
upper¶
def upper(
self,
/
)
Return a copy of the string converted to uppercase.
zfill¶
def zfill(
self,
width,
/
)
Pad a numeric string with zeros on the left, to fill a field of the given width.
The string is never truncated.
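zfill handles a leading sign and never shortens the input:

```python
padded = "42".zfill(5)      # '00042'
signed = "-42".zfill(5)     # '-0042' -- the sign stays at the front
longer = "1234567".zfill(5) # '1234567' -- already wider than the field
```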
Items¶
class Items(
/,
**data: 'Any'
)
A group of STAC Item objects, in the form of a dictionary from Item.id -> Item.
Ancestors (in MRO)¶
- pydantic.main.BaseModel
Class variables¶
model_computed_fields
model_config
model_fields
Static methods¶
construct¶
def construct(
_fields_set: 'set[str] | None' = None,
**values: 'Any'
) -> 'Self'
from_orm¶
def from_orm(
obj: 'Any'
) -> 'Self'
model_construct¶
def model_construct(
_fields_set: 'set[str] | None' = None,
**values: 'Any'
) -> 'Self'
Creates a new instance of the Model class with validated data.
Creates a new model setting __dict__ and __pydantic_fields_set__ from trusted or pre-validated data. Default values are respected, but no other validation is performed.
Note
model_construct() generally respects the model_config.extra setting on the provided model. That is, if model_config.extra == 'allow', then all extra passed values are added to the model instance's __dict__ and __pydantic_extra__ fields. If model_config.extra == 'ignore' (the default), then all extra passed values are ignored. Because no validation is performed with a call to model_construct(), having model_config.extra == 'forbid' does not result in an error if extra values are passed, but they will be ignored.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
_fields_set | Optional[set[str]] | A set of field names that were originally explicitly set during instantiation. If provided, this is directly used for the model_fields_set attribute. Otherwise, the field names from the values argument will be used. | None |
values | Any | Trusted or pre-validated data dictionary. | required |
Returns:
Type | Description |
---|---|
Self | A new instance of the Model class with validated data. |
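The contrast with full validation can be shown with a hypothetical model (a minimal sketch, assuming pydantic v2 is installed; Point is not part of this module):

```python
from pydantic import BaseModel

class Point(BaseModel):
    x: int
    y: int = 0

# model_validate performs validation and type coercion.
p = Point.model_validate({"x": "1"})  # "1" is coerced to the int 1

# model_construct skips validation: x keeps the raw string value,
# while the default for y is still applied.
q = Point.model_construct(x="1")
```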
model_json_schema¶
def model_json_schema(
by_alias: 'bool' = True,
ref_template: 'str' = '#/$defs/{model}',
schema_generator: 'type[GenerateJsonSchema]' = <class 'pydantic.json_schema.GenerateJsonSchema'>,
mode: 'JsonSchemaMode' = 'validation'
) -> 'dict[str, Any]'
Generates a JSON schema for a model class.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
by_alias | bool | Whether to use attribute aliases or not. | True |
ref_template | str | The reference template. | '#/$defs/{model}' |
schema_generator | type[GenerateJsonSchema] | To override the logic used to generate the JSON schema, as a subclass of GenerateJsonSchema with your desired modifications. | GenerateJsonSchema |
mode | JsonSchemaMode | The mode in which to generate the schema. | 'validation' |
Returns:
Type | Description |
---|---|
dict[str, Any] | The JSON schema for the given model class. |
model_parametrized_name¶
def model_parametrized_name(
params: 'tuple[type[Any], ...]'
) -> 'str'
Compute the class name for parametrizations of generic classes.
This method can be overridden to achieve a custom naming scheme for generic BaseModels.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
params | tuple[type[Any], ...] | Tuple of types of the class. Given a generic class Model with 2 type variables and a concrete model Model[str, int], the value (str, int) would be passed to params. | required |
Returns:
Type | Description |
---|---|
str | String representing the new class where params are passed to cls as type variables. |
Raises:
Type | Description |
---|---|
TypeError | Raised when trying to generate concrete names for non-generic models. |
model_rebuild¶
def model_rebuild(
*,
force: 'bool' = False,
raise_errors: 'bool' = True,
_parent_namespace_depth: 'int' = 2,
_types_namespace: 'dict[str, Any] | None' = None
) -> 'bool | None'
Try to rebuild the pydantic-core schema for the model.
This may be necessary when one of the annotations is a ForwardRef which could not be resolved during the initial attempt to build the schema, and automatic rebuilding fails.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
force | bool | Whether to force the rebuilding of the model schema. | False |
raise_errors | bool | Whether to raise errors. | True |
_parent_namespace_depth | int | The depth level of the parent namespace. | 2 |
_types_namespace | Optional[dict[str, Any]] | The types namespace. | None |
Returns:
Type | Description |
---|---|
Optional[bool] | Returns None if the schema is already "complete" and rebuilding was not required. If rebuilding was required, returns True if rebuilding was successful, otherwise False. |
model_validate¶
def model_validate(
obj: 'Any',
*,
strict: 'bool | None' = None,
from_attributes: 'bool | None' = None,
context: 'Any | None' = None
) -> 'Self'
Validate a pydantic model instance.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
obj | Any | The object to validate. | required |
strict | Optional[bool] | Whether to enforce types strictly. | None |
from_attributes | Optional[bool] | Whether to extract data from object attributes. | None |
context | Optional[Any] | Additional context to pass to the validator. | None |
Returns:
Type | Description |
---|---|
Self | The validated model instance. |
Raises:
Type | Description |
---|---|
ValidationError | If the object could not be validated. |
model_validate_json¶
def model_validate_json(
json_data: 'str | bytes | bytearray',
*,
strict: 'bool | None' = None,
context: 'Any | None' = None
) -> 'Self'
Usage docs: docs.pydantic.dev/2.9/concepts/json/#json-parsing
Validate the given JSON data against the Pydantic model.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
json_data | str, bytes or bytearray | The JSON data to validate. | required |
strict | Optional[bool] | Whether to enforce types strictly. | None |
context | Optional[Any] | Extra variables to pass to the validator. | None |
Returns:
Type | Description |
---|---|
Self | The validated Pydantic model. |
Raises:
Type | Description |
---|---|
ValidationError | If json_data is not a JSON string or the object could not be validated. |
model_validate_strings¶
def model_validate_strings(
obj: 'Any',
*,
strict: 'bool | None' = None,
context: 'Any | None' = None
) -> 'Self'
Validate the given object with string data against the Pydantic model.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
obj | Any | The object containing string data to validate. | required |
strict | Optional[bool] | Whether to enforce types strictly. | None |
context | Optional[Any] | Extra variables to pass to the validator. | None |
Returns:
Type | Description |
---|---|
Self | The validated Pydantic model. |
parse_file¶
def parse_file(
path: 'str | Path',
*,
content_type: 'str | None' = None,
encoding: 'str' = 'utf8',
proto: 'DeprecatedParseProtocol | None' = None,
allow_pickle: 'bool' = False
) -> 'Self'
parse_obj¶
def parse_obj(
obj: 'Any'
) -> 'Self'
parse_raw¶
def parse_raw(
b: 'str | bytes',
*,
content_type: 'str | None' = None,
encoding: 'str' = 'utf8',
proto: 'DeprecatedParseProtocol | None' = None,
allow_pickle: 'bool' = False
) -> 'Self'
schema¶
def schema(
by_alias: 'bool' = True,
ref_template: 'str' = '#/$defs/{model}'
) -> 'Dict[str, Any]'
schema_json¶
def schema_json(
*,
by_alias: 'bool' = True,
ref_template: 'str' = '#/$defs/{model}',
**dumps_kwargs: 'Any'
) -> 'str'
update_forward_refs¶
def update_forward_refs(
**localns: 'Any'
) -> 'None'
validate¶
def validate(
value: 'Any'
) -> 'Self'
Instance variables¶
model_extra
Get extra fields set during validation.
model_fields_set
Returns the set of fields that have been explicitly set on this model instance.
Methods¶
copy¶
def copy(
self,
*,
include: 'AbstractSetIntStr | MappingIntStrAny | None' = None,
exclude: 'AbstractSetIntStr | MappingIntStrAny | None' = None,
update: 'Dict[str, Any] | None' = None,
deep: 'bool' = False
) -> 'Self'
Returns a copy of the model.
Deprecated: this method is now deprecated; use model_copy instead.
If you need include or exclude, use:
data = self.model_dump(include=include, exclude=exclude, round_trip=True)
data = {**data, **(update or {})}
copied = self.model_validate(data)
Parameters:
Name | Type | Description | Default |
---|---|---|---|
include | Optional[AbstractSetIntStr or MappingIntStrAny] | Optional set or mapping specifying which fields to include in the copied model. | None |
exclude | Optional[AbstractSetIntStr or MappingIntStrAny] | Optional set or mapping specifying which fields to exclude in the copied model. | None |
update | Optional[Dict[str, Any]] | Optional dictionary of field-value pairs to override field values in the copied model. | None |
deep | bool | If True, the values of fields that are Pydantic models will be deep-copied. | False |
Returns:
Type | Description |
---|---|
Self | A copy of the model with included, excluded and updated fields as specified. |
dict¶
def dict(
self,
*,
include: 'IncEx | None' = None,
exclude: 'IncEx | None' = None,
by_alias: 'bool' = False,
exclude_unset: 'bool' = False,
exclude_defaults: 'bool' = False,
exclude_none: 'bool' = False
) -> 'Dict[str, Any]'
json¶
def json(
self,
*,
include: 'IncEx | None' = None,
exclude: 'IncEx | None' = None,
by_alias: 'bool' = False,
exclude_unset: 'bool' = False,
exclude_defaults: 'bool' = False,
exclude_none: 'bool' = False,
encoder: 'Callable[[Any], Any] | None' = PydanticUndefined,
models_as_dict: 'bool' = PydanticUndefined,
**dumps_kwargs: 'Any'
) -> 'str'
model_copy¶
def model_copy(
self,
*,
update: 'dict[str, Any] | None' = None,
deep: 'bool' = False
) -> 'Self'
Usage docs: docs.pydantic.dev/2.9/concepts/serialization/#model_copy
Returns a copy of the model.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
update | Optional[dict[str, Any]] | Values to change/add in the new model. Note: the data is not validated before creating the new model. You should trust this data. | None |
deep | bool | Set to True to make a deep copy of the model. | False |
Returns:
Type | Description |
---|---|
Self | New model instance. |
model_dump¶
def model_dump(
self,
*,
mode: "Literal['json', 'python'] | str" = 'python',
include: 'IncEx | None' = None,
exclude: 'IncEx | None' = None,
context: 'Any | None' = None,
by_alias: 'bool' = False,
exclude_unset: 'bool' = False,
exclude_defaults: 'bool' = False,
exclude_none: 'bool' = False,
round_trip: 'bool' = False,
warnings: "bool | Literal['none', 'warn', 'error']" = True,
serialize_as_any: 'bool' = False
) -> 'dict[str, Any]'
Usage docs: docs.pydantic.dev/2.9/concepts/serialization/#modelmodel_dump
Generate a dictionary representation of the model, optionally specifying which fields to include or exclude.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
mode | str | The mode in which to_python should run. If mode is 'json', the output will only contain JSON serializable types. If mode is 'python', the output may contain non-JSON-serializable Python objects. | 'python' |
include | Optional[IncEx] | A set of fields to include in the output. | None |
exclude | Optional[IncEx] | A set of fields to exclude from the output. | None |
context | Optional[Any] | Additional context to pass to the serializer. | None |
by_alias | bool | Whether to use the field's alias in the dictionary key if defined. | False |
exclude_unset | bool | Whether to exclude fields that have not been explicitly set. | False |
exclude_defaults | bool | Whether to exclude fields that are set to their default value. | False |
exclude_none | bool | Whether to exclude fields that have a value of None. | False |
round_trip | bool | If True, dumped values should be valid as input for non-idempotent types such as Json[T]. | False |
warnings | bool or str | How to handle serialization errors. False/"none" ignores them, True/"warn" logs errors, "error" raises a PydanticSerializationError. | True |
serialize_as_any | bool | Whether to serialize fields with duck-typing serialization behavior. | False |
Returns:
Type | Description |
---|---|
dict[str, Any] | A dictionary representation of the model. |
model_dump_json¶
def model_dump_json(
self,
*,
indent: 'int | None' = None,
include: 'IncEx | None' = None,
exclude: 'IncEx | None' = None,
context: 'Any | None' = None,
by_alias: 'bool' = False,
exclude_unset: 'bool' = False,
exclude_defaults: 'bool' = False,
exclude_none: 'bool' = False,
round_trip: 'bool' = False,
warnings: "bool | Literal['none', 'warn', 'error']" = True,
serialize_as_any: 'bool' = False
) -> 'str'
Usage docs: docs.pydantic.dev/2.9/concepts/serialization/#modelmodel_dump_json
Generates a JSON representation of the model using Pydantic's to_json
method.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
indent | Optional[int] | Indentation to use in the JSON output. If None is passed, the output will be compact. | None |
include | Optional[IncEx] | Field(s) to include in the JSON output. | None |
exclude | Optional[IncEx] | Field(s) to exclude from the JSON output. | None |
context | Optional[Any] | Additional context to pass to the serializer. | None |
by_alias | bool | Whether to serialize using field aliases. | False |
exclude_unset | bool | Whether to exclude fields that have not been explicitly set. | False |
exclude_defaults | bool | Whether to exclude fields that are set to their default value. | False |
exclude_none | bool | Whether to exclude fields that have a value of None. | False |
round_trip | bool | If True, dumped values should be valid as input for non-idempotent types such as Json[T]. | False |
warnings | bool or str | How to handle serialization errors. False/"none" ignores them, True/"warn" logs errors, "error" raises a PydanticSerializationError. | True |
serialize_as_any | bool | Whether to serialize fields with duck-typing serialization behavior. | False |
Returns:
Type | Description |
---|---|
str | A JSON string representation of the model. |
model_post_init¶
def model_post_init(
self,
_BaseModel__context: 'Any'
) -> 'None'
Override this method to perform additional initialization after __init__ and model_construct.
This is useful if you want to do some validation that requires the entire model to be initialized.