78 Commits

Author SHA1 Message Date
fe9984e2d8 type-fix
All checks were successful
Deploy on push / deploy (push) Successful in 43s
2025-03-22 18:39:14 +03:00
369ff757b0 [0.4.16] - 2025-03-22
All checks were successful
Deploy on push / deploy (push) Successful in 6s
- Added hierarchical comments pagination:
  - Created new GraphQL query `load_comments_branch` for efficient loading of hierarchical comments
  - Ability to load root comments with their first N replies
  - Added pagination for both root and child comments
  - Using existing `commented` field in `Stat` type to display number of replies
  - Added special `first_replies` field to store first replies to a comment
  - Optimized SQL queries for efficient loading of comment hierarchies
  - Implemented flexible comment sorting system (by time, rating)
2025-03-22 13:37:43 +03:00
615f1fe468 topics+authors-reimplemented-cache
All checks were successful
Deploy on push / deploy (push) Successful in 5s
2025-03-22 11:47:19 +03:00
86ddb50cb8 topics caching upgrade 2025-03-22 09:31:53 +03:00
31c32143d0 reaction-to-feature-fix
All checks were successful
Deploy on push / deploy (push) Successful in 56s
2025-03-21 12:34:10 +03:00
b63c387806 jsonfix3
All checks were successful
Deploy on push / deploy (push) Successful in 56s
2025-03-20 12:52:44 +03:00
dbbfd42e08 redeploy
All checks were successful
Deploy on push / deploy (push) Successful in 57s
2025-03-20 12:35:55 +03:00
47e12b4452 fx2
Some checks failed
Deploy on push / deploy (push) Failing after 16s
2025-03-20 12:33:27 +03:00
e1a1b4dc7d fx
All checks were successful
Deploy on push / deploy (push) Successful in 44s
2025-03-20 12:25:18 +03:00
ca01181f37 jsonfix
All checks were successful
Deploy on push / deploy (push) Successful in 44s
2025-03-20 12:24:30 +03:00
0aff77eda6 portfix
All checks were successful
Deploy on push / deploy (push) Successful in 49s
2025-03-20 12:13:14 +03:00
8a95aa1209 jsonload-fix
All checks were successful
Deploy on push / deploy (push) Successful in 45s
2025-03-20 12:05:58 +03:00
a4a3c35f4d lesscode
All checks were successful
Deploy on push / deploy (push) Successful in 46s
2025-03-20 12:04:47 +03:00
edece36ecc jsonenc-fix
All checks were successful
Deploy on push / deploy (push) Successful in 12s
2025-03-20 11:59:43 +03:00
247fc98760 cachedep-fix+orjson+fmt
All checks were successful
Deploy on push / deploy (push) Successful in 1m16s
2025-03-20 11:55:21 +03:00
a1781b3800 depfix
All checks were successful
Deploy on push / deploy (push) Successful in 1m4s
2025-03-20 11:36:12 +03:00
450c73c060 nothreads
All checks were successful
Deploy on push / deploy (push) Successful in 47s
2025-03-20 11:30:36 +03:00
3a1924279f redeploy
All checks were successful
Deploy on push / deploy (push) Successful in 49s
2025-03-20 11:23:37 +03:00
094e7e6fe2 granian-fix
Some checks failed
Deploy on push / deploy (push) Failing after 7s
2025-03-20 11:19:29 +03:00
ae48a18536 comment-delete-handling-patch
All checks were successful
Deploy on push / deploy (push) Successful in 1m15s
2025-03-20 11:01:39 +03:00
354bda0efa drafts-fix
All checks were successful
Deploy on push / deploy (push) Successful in 59s
2025-03-13 22:21:43 +03:00
856f4ffc85 i 2025-03-09 21:01:52 +03:00
20eba36c65 create-draft-fix
All checks were successful
Deploy on push / deploy (push) Successful in 1m47s
2025-02-27 16:16:41 +03:00
8cd0c8ea4c less-logs
All checks were successful
Deploy on push / deploy (push) Successful in 56s
2025-02-19 00:23:42 +03:00
2939cd8adc pyright-conf
All checks were successful
Deploy on push / deploy (push) Successful in 57s
2025-02-19 00:21:51 +03:00
41d8253094 lesslogs
All checks were successful
Deploy on push / deploy (push) Successful in 56s
2025-02-14 21:49:21 +03:00
5263d1657e 0.4.11-b
All checks were successful
Deploy on push / deploy (push) Successful in 56s
2025-02-12 22:34:57 +03:00
1de3d163c1 0.4.11-create_draft-fix
All checks were successful
Deploy on push / deploy (push) Successful in 57s
2025-02-12 21:59:05 +03:00
d3ed335fde main_topic-fix7-debug
All checks were successful
Deploy on push / deploy (push) Successful in 55s
2025-02-12 20:04:06 +03:00
f84be7b11b main_topic-fix7
All checks were successful
Deploy on push / deploy (push) Successful in 56s
2025-02-12 19:33:02 +03:00
b011c0fd48 main_topic-fix6
All checks were successful
Deploy on push / deploy (push) Successful in 54s
2025-02-12 19:21:21 +03:00
fe661a5008 main_topic-json-fix
All checks were successful
Deploy on push / deploy (push) Successful in 56s
2025-02-12 02:23:51 +03:00
e97823f99c main_topic-fix4
All checks were successful
Deploy on push / deploy (push) Successful in 56s
2025-02-12 00:55:55 +03:00
a9dd593ac8 main_topic-fix3
All checks were successful
Deploy on push / deploy (push) Successful in 56s
2025-02-12 00:47:39 +03:00
1585e55342 main_topic-fix2
All checks were successful
Deploy on push / deploy (push) Successful in 57s
2025-02-12 00:39:25 +03:00
52b608da99 main_topic-fix
All checks were successful
Deploy on push / deploy (push) Successful in 57s
2025-02-12 00:31:18 +03:00
5a4f75537d debug more
All checks were successful
Deploy on push / deploy (push) Successful in 58s
2025-02-11 23:47:54 +03:00
ce4a401c1a minor-debug
All checks were successful
Deploy on push / deploy (push) Successful in 57s
2025-02-11 23:44:29 +03:00
7814e3d64d 0.4.10
All checks were successful
Deploy on push / deploy (push) Successful in 57s
2025-02-11 12:40:55 +03:00
9191d83f84 usermoved
All checks were successful
Deploy on push / deploy (push) Successful in 56s
2025-02-11 12:24:02 +03:00
5d87035885 0.4.10-a
All checks were successful
Deploy on push / deploy (push) Successful in 44s
2025-02-11 12:00:35 +03:00
25b61c6b29 simple-dockerfile
All checks were successful
Deploy on push / deploy (push) Successful in 1m41s
2025-02-10 19:10:13 +03:00
9671ef2508 author-stat-fix
All checks were successful
Deploy on push / deploy (push) Successful in 1m22s
2025-02-10 18:38:26 +03:00
759520f024 initdb-fix
All checks were successful
Deploy on push / deploy (push) Successful in 1m13s
2025-02-10 18:15:54 +03:00
a84d8a0c7e 0.4.9-c
All checks were successful
Deploy on push / deploy (push) Successful in 7s
2025-02-10 18:04:08 +03:00
20173f7d1c trigdeploy
All checks were successful
Deploy on push / deploy (push) Successful in 55s
2025-02-10 11:30:58 +03:00
4a835bbfba 0.4.9-b
All checks were successful
Deploy on push / deploy (push) Successful in 2m38s
2025-02-09 22:26:50 +03:00
37a9a284ef 0.4.9-drafts 2025-02-09 17:18:01 +03:00
dce05342df fmt
All checks were successful
Deploy on push / deploy (push) Successful in 58s
2025-02-04 15:27:59 +03:00
56db33d7f1 get_my_rates_comments-fix
All checks were successful
Deploy on push / deploy (push) Successful in 55s
2025-02-04 02:53:01 +03:00
40b4703b1a get_cached_topic_followers-fix
All checks were successful
Deploy on push / deploy (push) Successful in 55s
2025-02-04 01:40:00 +03:00
747d550d80 fix-revalidation2
All checks were successful
Deploy on push / deploy (push) Successful in 58s
2025-02-04 00:08:25 +03:00
84de0c5538 fix-revalidation
All checks were successful
Deploy on push / deploy (push) Successful in 59s
2025-02-04 00:01:54 +03:00
33ddfc6c31 after-handlers-cache
All checks were successful
Deploy on push / deploy (push) Successful in 57s
2025-02-03 23:22:45 +03:00
26b862d601 more-revalidation
All checks were successful
Deploy on push / deploy (push) Successful in 1m32s
2025-02-03 23:16:50 +03:00
9fe5fea238 editor-fix
All checks were successful
Deploy on push / deploy (push) Successful in 58s
2025-02-03 19:06:00 +03:00
0347b6f5ff logs-update-shout-5
All checks were successful
Deploy on push / deploy (push) Successful in 59s
2025-02-02 21:57:51 +03:00
ffb75e53f7 logs-update-shout-4
All checks were successful
Deploy on push / deploy (push) Successful in 58s
2025-02-02 21:55:22 +03:00
582ba75643 logs-update-shout-3
All checks were successful
Deploy on push / deploy (push) Successful in 58s
2025-02-02 21:49:28 +03:00
2db1da3194 logs-update-shout-2
All checks were successful
Deploy on push / deploy (push) Successful in 58s
2025-02-02 21:45:24 +03:00
fd6b0ce5fd logs-update-shout
All checks were successful
Deploy on push / deploy (push) Successful in 58s
2025-02-02 21:41:03 +03:00
Stepan Vladovskiy
670a477f9a debug: ok, moved map to the top layer of nginx; now this version is without map
All checks were successful
Deploy on push / deploy (push) Successful in 57s
2025-01-29 14:38:33 -03:00
Stepan Vladovskiy
46945197d9 debug: with hardcoded domain testing.dscrs.site and none in default, for understanding
All checks were successful
Deploy on push / deploy (push) Successful in 1m0s
2025-01-29 13:59:47 -03:00
Stepan Vladovskiy
4ebc64d13a fix: so, the problem may be somewhere else, because map is working fine, and we are trying to find where the overriding issue is. Modified main.py with some extra rules; maybe it helps
All checks were successful
Deploy on push / deploy (push) Successful in 55s
2025-01-28 20:03:56 -03:00
Stepan Vladovskiy
bc9560e56e feat: safe version. Debugging did not give results; this is the simple version. For code beauty it can be rewritten from the previous no-debug version
All checks were successful
Deploy on push / deploy (push) Successful in 55s
2025-01-28 19:38:19 -03:00
Stepan Vladovskiy
38f5aab9e0 debug: not safe version. back to safe map function
All checks were successful
Deploy on push / deploy (push) Successful in 55s
2025-01-28 19:27:24 -03:00
Stepan Vladovskiy
95f49a7ca5 debug: rewrite nginx file to use it without variables logic
All checks were successful
Deploy on push / deploy (push) Successful in 57s
2025-01-28 19:23:02 -03:00
Stepan Vladovskiy
cd8f5977af debug: sv, just in case, for testing the mapping issue and trying to find the place with the filter; in this option, if origin is dscrs.site then allow is discours.io
All checks were successful
Deploy on push / deploy (push) Successful in 57s
2025-01-28 18:57:27 -03:00
Stepan Vladovskiy
a218d1309b debug: no force options and simple regexp logic
All checks were successful
Deploy on push / deploy (push) Successful in 59s
2025-01-28 18:24:10 -03:00
Stepan Vladovskiy
113d4807b2 feat:sv with force flag
All checks were successful
Deploy on push / deploy (push) Successful in 1m2s
2025-01-28 17:55:41 -03:00
Stepan Vladovskiy
9bc3cdbd0b debug: sv, clean testing of CORS policy mapping because it is redundant, and add Allow-Origin header custom log
Some checks failed
Deploy on push / deploy (push) Failing after 7s
2025-01-28 17:48:59 -03:00
Stepan Vladovskiy
79e6402df3 debug: added a separate rule for dscrs.site in the nginx config map part 2025-01-28 17:48:59 -03:00
Stepan Vladovskiy
ec2e9444e3 debug: nginx conf sigil with custom logs with headers and backslash in dscrs.site 2025-01-28 17:48:59 -03:00
Stepan Vladovskiy
a86a2fee85 debug: nginx conf sigil file without custom log, and add backslash determination for domain dscrs.site 2025-01-28 17:48:59 -03:00
Stepan Vladovskiy
aec67b9db8 debug: layer with logs added for debug allow_orrigin missing for dscrs.site domain fix back slash 2025-01-28 17:48:59 -03:00
Stepan Vladovskiy
0bbe1d428a debug: layer with logs added for debug allow_orrigin missing for dscrs.site domain 2025-01-28 17:48:59 -03:00
Stepan Vladovskiy
a05f0afa8b debug: layer with logs added for debug allow_orrigin missing for dscrs.site domain 2025-01-28 17:48:59 -03:00
5e2842774a media-field-workarounds
Some checks failed
Deploy on push / deploy (push) Failing after 8s
2025-01-28 15:38:10 +03:00
58 changed files with 3914 additions and 765 deletions

2
.gitignore vendored
View File

@@ -160,3 +160,5 @@ views.json
*.pem
*.key
*.crt
*cache.json
.cursor

View File

@@ -1,3 +1,106 @@
#### [0.4.16] - 2025-03-22
- Added hierarchical comments pagination:
- Created new GraphQL query `load_comments_branch` for efficient loading of hierarchical comments
- Ability to load root comments with their first N replies
- Added pagination for both root and child comments
- Using existing `commented` field in `Stat` type to display number of replies
- Added special `first_replies` field to store first replies to a comment
- Optimized SQL queries for efficient loading of comment hierarchies
- Implemented flexible comment sorting system (by time, rating)
#### [0.4.15] - 2025-03-22
- Upgraded caching system, described in `docs/caching.md`
- Module `cache/memorycache.py` removed
- Enhanced caching system with backward compatibility:
- Unified cache key generation with support for existing naming patterns
- Improved Redis operation function with better error handling
- Updated precache module to use consistent Redis interface
- Integrated revalidator with the invalidation system for better performance
- Added comprehensive documentation for the caching system
- Enhanced cached_query to support template-based cache keys
- Standardized error handling across all cache operations
- Optimized cache invalidation system:
- Added targeted invalidation for individual entities (authors, topics)
- Improved revalidation manager with individual object processing
- Implemented batched processing for high-volume invalidations
- Reduced Redis operations by using precise key invalidation instead of prefix-based wipes
- Added special handling for slug changes in topics
- Unified caching system for all models:
- Implemented abstract functions `cache_data`, `get_cached_data` and `invalidate_cache_by_prefix`
- Added `cached_query` function for unified approach to query caching
- Updated resolvers `author.py` and `topic.py` to use the new caching API
- Improved logging for cache operations to simplify debugging
- Optimized Redis memory usage through key format unification
- Improved caching and sorting in Topic and Author modules:
- Added support for dictionary sorting parameters in `by` for both modules
- Optimized cache key generation for stable behavior with various parameters
- Enhanced sorting logic with direction support and arbitrary fields
- Added `by` parameter support in the API for getting topics by community
- Performance optimizations for author-related queries:
- Added SQLAlchemy-managed indexes to `Author`, `AuthorFollower`, `AuthorRating` and `AuthorBookmark` models
- Implemented persistent Redis caching for author queries without TTL (invalidated only on changes)
- Optimized author retrieval with separate endpoints:
- `get_authors_all` - returns all non-deleted authors without statistics
- `get_authors_paginated` - returns authors with statistics and pagination support
- `load_authors_by` - optimized to use caching and efficient sorting
- Improved SQL queries with optimized JOIN conditions and efficient filtering
- Added pre-aggregation of statistics (shouts count, followers count) in single efficient queries
- Implemented robust cache invalidation on author updates
- Created necessary indexes for author lookups by user ID, slug, and timestamps
#### [0.4.14] - 2025-03-21
- Significant performance improvements for topic queries:
- Added database indexes to optimize JOIN operations
- Implemented persistent Redis caching for topic queries (no TTL, invalidated only on changes)
- Optimized topic retrieval with separate endpoints for different use cases:
- `get_topics_all` - returns all topics without statistics for lightweight listing
- `get_topics_paginated` - returns topics with statistics and pagination support
- `get_topics_by_community` - adds pagination and optimized filtering by community
- Added SQLAlchemy-managed indexes directly in ORM models for automatic schema maintenance
- Created `sync_indexes()` function for automatic index synchronization during app startup
- Reduced database load by pre-aggregating statistics in optimized SQL queries
- Added robust cache invalidation on topic create/update/delete operations
- Improved query optimization with proper JOIN conditions and specific partial indexes
#### [0.4.13] - 2025-03-20
- Fixed Topic objects serialization error in cache/memorycache.py
- Improved CustomJSONEncoder to support SQLAlchemy models with dict() method
- Enhanced error handling in cache_on_arguments decorator
- Modified `load_reactions_by` to include deleted reactions when `include_deleted=true` for proper comment tree building
- Fixed featured/unfeatured logic in reaction processing:
- Dislike reactions now properly take precedence over likes
- Featured status now requires more than 4 likes from users with featured articles
- Removed unnecessary filters for deleted reactions since rating reactions are physically deleted
- Author's featured status now based on having non-deleted articles with featured_at
#### [0.4.12] - 2025-03-19
- `delete_reaction` detects comments and uses `deleted_at` update
- `check_to_unfeature` etc. update
- dogpile dep in `services/memorycache.py` optimized
#### [0.4.11] - 2025-02-12
- `create_draft` resolver requires draft_id fixed
- `create_draft` resolver defaults body and title fields to empty string
#### [0.4.9] - 2025-02-09
- `Shout.draft` field added
- `Draft` entity added
- `create_draft`, `update_draft`, `delete_draft` mutations and resolvers added
- `create_shout`, `update_shout`, `delete_shout` mutations removed from GraphQL API
- `load_drafts` resolver implemented
- `publish_` and `unpublish_` mutations and resolvers added
- `create_`, `update_`, `delete_` mutations and resolvers added for `Draft` entity
- tests with pytest for original auth, shouts, drafts
- `Dockerfile` and `pyproject.toml` removed for simplicity; replaced with `Procfile` and `requirements.txt`
#### [0.4.8] - 2025-02-03
- `Reaction.deleted_at` filter on `update_reaction` resolver added
- `triggers` module updated with `after_shout_handler`, `after_reaction_handler` for cache revalidation
- `after_shout_handler`, `after_reaction_handler` now also handle `deleted_at` field
- `get_cached_topic_followers` fixed
- `get_my_rates_comments` fixed
#### [0.4.7]
- `get_my_rates_shouts` resolver added with:
- `shout_id` and `my_rate` fields in response
@@ -205,22 +308,4 @@
#### [0.2.7]
- `loadFollowedReactions` now with `login_required`
- notifier service api draft
- added `shout` visibility kind in schema
- community isolated from author in orm
#### [0.2.6]
- redis connection pool
- auth context fixes
- communities orm, resolvers, schema
#### [0.2.5]
- restructured
- all users have their profiles as authors in core
- `gittask`, `inbox` and `auth` logics removed
- `settings` moved to base and now smaller
- new outside auth schema
- removed `gittask`, `auth`, `inbox`, `migration`
- `loadFollowedReactions` now with `

View File

@@ -1,25 +1,18 @@
FROM python:3.12-alpine
FROM python:slim
# Update package lists and install necessary dependencies
RUN apk update && \
apk add --no-cache build-base icu-data-full curl python3-dev musl-dev && \
curl -sSL https://install.python-poetry.org | python
RUN apt-get update && apt-get install -y \
postgresql-client \
curl \
&& rm -rf /var/lib/apt/lists/*
# Set working directory
WORKDIR /app
# Copy only the pyproject.toml file initially
COPY pyproject.toml /app/
COPY requirements.txt .
# Install poetry and dependencies
RUN pip install poetry && \
poetry config virtualenvs.create false && \
poetry install --no-root --only main
RUN pip install -r requirements.txt
# Copy the rest of the files
COPY . /app
COPY . .
# Expose the port
EXPOSE 8000
CMD ["python", "server.py"]
CMD ["python", "-m", "granian", "main:app", "--interface", "asgi", "--host", "0.0.0.0", "--port", "8000"]

View File

@@ -1 +0,0 @@
web: python server.py

View File

@@ -37,36 +37,43 @@ Backend service providing GraphQL API for content management system with reactio
## Development
### Setup
Start API server with `dev` keyword added and `mkcert` installed:
### Prepare environment:
```shell
mkdir .venv
python3.12 -m venv .venv
poetry env use .venv/bin/python3.12
poetry update
python3.12 -m venv venv
source venv/bin/activate
```
### Run server
First, certificates are required to run the server.
```shell
mkcert -install
mkcert localhost
poetry run server.py dev
```
Then, run the server:
```shell
python server.py dev
```
### Useful Commands
```shell
# Linting and import sorting
poetry run ruff check . --fix --select I
ruff check . --fix --select I
# Code formatting
poetry run ruff format . --line-length=120
ruff format . --line-length=120
# Run tests
poetry run pytest
pytest
# Type checking
poetry run mypy .
mypy .
```
### Code Style

View File

@@ -1,14 +1,15 @@
from binascii import hexlify
from hashlib import sha256
# from base.exceptions import InvalidPassword, InvalidToken
from base.orm import local_session
from jwt import DecodeError, ExpiredSignatureError
from passlib.hash import bcrypt
from auth.exceptions import ExpiredToken, InvalidToken
from auth.jwtcodec import JWTCodec
from auth.tokenstorage import TokenStorage
from orm import User
from orm.user import User
# from base.exceptions import InvalidPassword, InvalidToken
from services.db import local_session
class Password:
@@ -79,10 +80,10 @@ class Identity:
if not await TokenStorage.exist(f"{payload.user_id}-{payload.username}-{token}"):
# raise InvalidToken("Login token has expired, please login again")
return {"error": "Token has expired"}
except ExpiredSignatureError:
except ExpiredToken:
# raise InvalidToken("Login token has expired, please try again")
return {"error": "Token has expired"}
except DecodeError:
except InvalidToken:
# raise InvalidToken("token format error") from e
return {"error": "Token format error"}
with local_session() as session:

View File

@@ -1,9 +1,8 @@
from datetime import datetime, timedelta, timezone
from base.redis import redis
from validations.auth import AuthInput
from auth.jwtcodec import JWTCodec
from auth.validations import AuthInput
from services.redis import redis
from settings import ONETIME_TOKEN_LIFE_SPAN, SESSION_TOKEN_LIFE_SPAN

View File

@@ -1,6 +1,15 @@
import time
from sqlalchemy import JSON, Boolean, Column, DateTime, ForeignKey, Integer, String, func
from sqlalchemy import (
JSON,
Boolean,
Column,
DateTime,
ForeignKey,
Integer,
String,
func,
)
from sqlalchemy.orm import relationship
from services.db import Base

116
auth/validations.py Normal file
View File

@@ -0,0 +1,116 @@
import re
from datetime import datetime
from typing import Dict, List, Optional, Union
from pydantic import BaseModel, Field, field_validator
# RFC 5322 compliant email regex pattern
EMAIL_PATTERN = r"^[a-zA-Z0-9._%+-]+@[a-zA-Z0-9.-]+\.[a-zA-Z]{2,}$"
class AuthInput(BaseModel):
"""Base model for authentication input validation"""
user_id: str = Field(description="Unique user identifier")
username: str = Field(min_length=2, max_length=50)
token: str = Field(min_length=32)
@field_validator("user_id")
@classmethod
def validate_user_id(cls, v: str) -> str:
if not v.strip():
raise ValueError("user_id cannot be empty")
return v
class UserRegistrationInput(BaseModel):
"""Validation model for user registration"""
email: str = Field(max_length=254) # Max email length per RFC 5321
password: str = Field(min_length=8, max_length=100)
name: str = Field(min_length=2, max_length=50)
@field_validator("email")
@classmethod
def validate_email(cls, v: str) -> str:
"""Validate email format"""
if not re.match(EMAIL_PATTERN, v):
raise ValueError("Invalid email format")
return v.lower()
@field_validator("password")
@classmethod
def validate_password_strength(cls, v: str) -> str:
"""Validate password meets security requirements"""
if not any(c.isupper() for c in v):
raise ValueError("Password must contain at least one uppercase letter")
if not any(c.islower() for c in v):
raise ValueError("Password must contain at least one lowercase letter")
if not any(c.isdigit() for c in v):
raise ValueError("Password must contain at least one number")
if not any(c in "!@#$%^&*()_+-=[]{}|;:,.<>?" for c in v):
raise ValueError("Password must contain at least one special character")
return v
class UserLoginInput(BaseModel):
"""Validation model for user login"""
email: str = Field(max_length=254)
password: str = Field(min_length=8, max_length=100)
@field_validator("email")
@classmethod
def validate_email(cls, v: str) -> str:
if not re.match(EMAIL_PATTERN, v):
raise ValueError("Invalid email format")
return v.lower()
class TokenPayload(BaseModel):
"""Validation model for JWT token payload"""
user_id: str
username: str
exp: datetime
iat: datetime
scopes: Optional[List[str]] = []
class OAuthInput(BaseModel):
"""Validation model for OAuth input"""
provider: str = Field(pattern="^(google|github|facebook)$")
code: str
redirect_uri: Optional[str] = None
@field_validator("provider")
@classmethod
def validate_provider(cls, v: str) -> str:
valid_providers = ["google", "github", "facebook"]
if v.lower() not in valid_providers:
raise ValueError(f"Provider must be one of: {', '.join(valid_providers)}")
return v.lower()
class AuthResponse(BaseModel):
"""Validation model for authentication responses"""
success: bool
token: Optional[str] = None
error: Optional[str] = None
user: Optional[Dict[str, Union[str, int, bool]]] = None
@field_validator("error")
@classmethod
def validate_error_if_not_success(cls, v: Optional[str], info) -> Optional[str]:
if not info.data.get("success") and not v:
raise ValueError("Error message required when success is False")
return v
@field_validator("token")
@classmethod
def validate_token_if_success(cls, v: Optional[str], info) -> Optional[str]:
if info.data.get("success") and not v:
raise ValueError("Token required when success is True")
return v

345
cache/cache.py vendored
View File

@@ -1,7 +1,37 @@
"""
Caching system for the Discours platform
----------------------------------------
This module provides a comprehensive caching solution with these key components:
1. KEY NAMING CONVENTIONS:
- Entity-based keys: "entity:property:value" (e.g., "author:id:123")
- Collection keys: "entity:collection:params" (e.g., "authors:stats:limit=10:offset=0")
- Special case keys: Maintained for backwards compatibility (e.g., "topic_shouts_123")
2. CORE FUNCTIONS:
- cached_query(): High-level function for retrieving cached data or executing queries
3. ENTITY-SPECIFIC FUNCTIONS:
- cache_author(), cache_topic(): Cache entity data
- get_cached_author(), get_cached_topic(): Retrieve entity data from cache
- invalidate_cache_by_prefix(): Invalidate all keys with a specific prefix
4. CACHE INVALIDATION STRATEGY:
- Direct invalidation via invalidate_* functions for immediate changes
- Delayed invalidation via revalidation_manager for background processing
- Event-based triggers for automatic cache updates (see triggers.py)
To maintain consistency with the existing codebase, this module preserves
the original key naming patterns while providing a more structured approach
for new cache operations.
"""
import asyncio
import json
from typing import List
from typing import Any, Dict, List, Optional, Union
import orjson
from sqlalchemy import and_, join, select
from orm.author import Author, AuthorFollower
@@ -19,8 +49,10 @@ DEFAULT_FOLLOWS = {
"communities": [{"id": 1, "name": "Дискурс", "slug": "discours", "pic": ""}],
}
CACHE_TTL = 300 # 5 минут
CACHE_TTL = 300 # 5 minutes
# Key templates for common entity types
# These are used throughout the codebase and should be maintained for compatibility
CACHE_KEYS = {
"TOPIC_ID": "topic:id:{}",
"TOPIC_SLUG": "topic:slug:{}",
@@ -37,8 +69,8 @@ CACHE_KEYS = {
async def cache_topic(topic: dict):
payload = json.dumps(topic, cls=CustomJSONEncoder)
await asyncio.gather(
redis_operation("SET", f"topic:id:{topic['id']}", payload),
redis_operation("SET", f"topic:slug:{topic['slug']}", payload),
redis.execute("SET", f"topic:id:{topic['id']}", payload),
redis.execute("SET", f"topic:slug:{topic['slug']}", payload),
)
@@ -46,30 +78,30 @@ async def cache_topic(topic: dict):
async def cache_author(author: dict):
payload = json.dumps(author, cls=CustomJSONEncoder)
await asyncio.gather(
redis_operation("SET", f"author:user:{author['user'].strip()}", str(author["id"])),
redis_operation("SET", f"author:id:{author['id']}", payload),
redis.execute("SET", f"author:user:{author['user'].strip()}", str(author["id"])),
redis.execute("SET", f"author:id:{author['id']}", payload),
)
# Cache follows data
async def cache_follows(follower_id: int, entity_type: str, entity_id: int, is_insert=True):
key = f"author:follows-{entity_type}s:{follower_id}"
follows_str = await redis_operation("GET", key)
follows = json.loads(follows_str) if follows_str else DEFAULT_FOLLOWS[entity_type]
follows_str = await redis.execute("GET", key)
follows = orjson.loads(follows_str) if follows_str else DEFAULT_FOLLOWS[entity_type]
if is_insert:
if entity_id not in follows:
follows.append(entity_id)
else:
follows = [eid for eid in follows if eid != entity_id]
await redis_operation("SET", key, json.dumps(follows, cls=CustomJSONEncoder))
await redis.execute("SET", key, json.dumps(follows, cls=CustomJSONEncoder))
await update_follower_stat(follower_id, entity_type, len(follows))
# Update follower statistics
async def update_follower_stat(follower_id, entity_type, count):
follower_key = f"author:id:{follower_id}"
follower_str = await redis_operation("GET", follower_key)
follower = json.loads(follower_str) if follower_str else None
follower_str = await redis.execute("GET", follower_key)
follower = orjson.loads(follower_str) if follower_str else None
if follower:
follower["stat"] = {f"{entity_type}s": count}
await cache_author(follower)
@@ -78,9 +110,9 @@ async def update_follower_stat(follower_id, entity_type, count):
# Get author from cache
async def get_cached_author(author_id: int, get_with_stat):
author_key = f"author:id:{author_id}"
result = await redis_operation("GET", author_key)
result = await redis.execute("GET", author_key)
if result:
return json.loads(result)
return orjson.loads(result)
# Load from database if not found in cache
q = select(Author).where(Author.id == author_id)
authors = get_with_stat(q)
@@ -103,16 +135,16 @@ async def get_cached_topic(topic_id: int):
dict: Topic data or None if not found.
"""
topic_key = f"topic:id:{topic_id}"
cached_topic = await redis_operation("GET", topic_key)
cached_topic = await redis.execute("GET", topic_key)
if cached_topic:
return json.loads(cached_topic)
return orjson.loads(cached_topic)
# If not in cache, fetch from the database
with local_session() as session:
topic = session.execute(select(Topic).where(Topic.id == topic_id)).scalar_one_or_none()
if topic:
topic_dict = topic.dict()
await redis_operation("SET", topic_key, json.dumps(topic_dict, cls=CustomJSONEncoder))
await redis.execute("SET", topic_key, json.dumps(topic_dict, cls=CustomJSONEncoder))
return topic_dict
return None
@@ -121,9 +153,9 @@ async def get_cached_topic(topic_id: int):
# Get topic by slug from cache
async def get_cached_topic_by_slug(slug: str, get_with_stat):
topic_key = f"topic:slug:{slug}"
result = await redis_operation("GET", topic_key)
result = await redis.execute("GET", topic_key)
if result:
return json.loads(result)
return orjson.loads(result)
# Load from database if not found in cache
topic_query = select(Topic).where(Topic.slug == slug)
topics = get_with_stat(topic_query)
@@ -138,8 +170,8 @@ async def get_cached_topic_by_slug(slug: str, get_with_stat):
async def get_cached_authors_by_ids(author_ids: List[int]) -> List[dict]:
# Fetch all author data concurrently
keys = [f"author:id:{author_id}" for author_id in author_ids]
results = await asyncio.gather(*(redis_operation("GET", key) for key in keys))
authors = [json.loads(result) if result else None for result in results]
results = await asyncio.gather(*(redis.execute("GET", key) for key in keys))
authors = [orjson.loads(result) if result else None for result in results]
# Load missing authors from database and cache
missing_indices = [index for index, author in enumerate(authors) if author is None]
if missing_indices:
@@ -156,47 +188,47 @@ async def get_cached_authors_by_ids(author_ids: List[int]) -> List[dict]:
async def get_cached_topic_followers(topic_id: int):
"""
Получает подписчиков темы по ID, используя кеш Redis.
Если данные отсутствуют в кеше, извлекает из базы данных и кеширует их.
:param topic_id: Идентификатор темы, подписчиков которой необходимо получить.
:return: Список подписчиков темы, каждый элемент представляет собой словарь с ID и именем автора.
Args:
topic_id: ID темы
Returns:
List[dict]: Список подписчиков с их данными
"""
try:
# Попытка получить данные из кеша
cached = await redis_operation("GET", f"topic:followers:{topic_id}")
if cached:
followers_ids = json.loads(cached)
logger.debug(f"Cached {len(followers_ids)} followers for topic #{topic_id}")
followers = await get_cached_authors_by_ids(followers_ids)
return followers
cache_key = CACHE_KEYS["TOPIC_FOLLOWERS"].format(topic_id)
cached = await redis.execute("GET", cache_key)
# Если данные не найдены в кеше, загрузка из базы данных
async with local_session() as session:
result = await session.execute(
session.query(Author.id)
if cached:
followers_ids = orjson.loads(cached)
logger.debug(f"Found {len(followers_ids)} cached followers for topic #{topic_id}")
return await get_cached_authors_by_ids(followers_ids)
with local_session() as session:
followers_ids = [
f[0]
for f in session.query(Author.id)
.join(TopicFollower, TopicFollower.follower == Author.id)
.filter(TopicFollower.topic == topic_id)
)
followers_ids = [f[0] for f in result.scalars().all()]
.all()
]
# Кеширование результатов
await redis_operation("SET", f"topic:followers:{topic_id}", json.dumps(followers_ids))
# Получение подробной информации о подписчиках по их ID
await redis.execute("SETEX", cache_key, CACHE_TTL, orjson.dumps(followers_ids))
followers = await get_cached_authors_by_ids(followers_ids)
logger.debug(followers)
logger.debug(f"Cached {len(followers)} followers for topic #{topic_id}")
return followers
except Exception as e:
logger.error(f"Ошибка при получении подписчиков для темы #{topic_id}: {str(e)}")
logger.error(f"Error getting followers for topic #{topic_id}: {str(e)}")
return []
# Get cached author followers
async def get_cached_author_followers(author_id: int):
# Check cache for data
cached = await redis_operation("GET", f"author:followers:{author_id}")
cached = await redis.execute("GET", f"author:followers:{author_id}")
if cached:
followers_ids = json.loads(cached)
followers_ids = orjson.loads(cached)
followers = await get_cached_authors_by_ids(followers_ids)
logger.debug(f"Cached followers for author #{author_id}: {len(followers)}")
return followers
@@ -210,7 +242,7 @@ async def get_cached_author_followers(author_id: int):
.filter(AuthorFollower.author == author_id, Author.id != author_id)
.all()
]
await redis_operation("SET", f"author:followers:{author_id}", json.dumps(followers_ids))
await redis.execute("SET", f"author:followers:{author_id}", orjson.dumps(followers_ids))
followers = await get_cached_authors_by_ids(followers_ids)
return followers
@@ -218,9 +250,9 @@ async def get_cached_author_followers(author_id: int):
# Get cached follower authors
async def get_cached_follower_authors(author_id: int):
# Attempt to retrieve authors from cache
cached = await redis_operation("GET", f"author:follows-authors:{author_id}")
cached = await redis.execute("GET", f"author:follows-authors:{author_id}")
if cached:
authors_ids = json.loads(cached)
authors_ids = orjson.loads(cached)
else:
# Query authors from database
with local_session() as session:
@@ -232,7 +264,7 @@ async def get_cached_follower_authors(author_id: int):
.where(AuthorFollower.follower == author_id)
).all()
]
await redis_operation("SET", f"author:follows-authors:{author_id}", json.dumps(authors_ids))
await redis.execute("SET", f"author:follows-authors:{author_id}", orjson.dumps(authors_ids))
authors = await get_cached_authors_by_ids(authors_ids)
return authors
@@ -241,9 +273,9 @@ async def get_cached_follower_authors(author_id: int):
# Get cached follower topics
async def get_cached_follower_topics(author_id: int):
# Attempt to retrieve topics from cache
cached = await redis_operation("GET", f"author:follows-topics:{author_id}")
cached = await redis.execute("GET", f"author:follows-topics:{author_id}")
if cached:
topics_ids = json.loads(cached)
topics_ids = orjson.loads(cached)
else:
# Load topics from database and cache them
with local_session() as session:
@@ -254,13 +286,13 @@ async def get_cached_follower_topics(author_id: int):
.where(TopicFollower.follower == author_id)
.all()
]
await redis_operation("SET", f"author:follows-topics:{author_id}", json.dumps(topics_ids))
await redis.execute("SET", f"author:follows-topics:{author_id}", orjson.dumps(topics_ids))
topics = []
for topic_id in topics_ids:
topic_str = await redis_operation("GET", f"topic:id:{topic_id}")
topic_str = await redis.execute("GET", f"topic:id:{topic_id}")
if topic_str:
topic = json.loads(topic_str)
topic = orjson.loads(topic_str)
if topic and topic not in topics:
topics.append(topic)
@@ -280,12 +312,12 @@ async def get_cached_author_by_user_id(user_id: str, get_with_stat):
dict: Dictionary with author data or None if not found.
"""
# Attempt to find author ID by user_id in Redis cache
author_id = await redis_operation("GET", f"author:user:{user_id.strip()}")
author_id = await redis.execute("GET", f"author:user:{user_id.strip()}")
if author_id:
# If ID is found, get full author data by ID
author_data = await redis_operation("GET", f"author:id:{author_id}")
author_data = await redis.execute("GET", f"author:id:{author_id}")
if author_data:
return json.loads(author_data)
return orjson.loads(author_data)
# If data is not found in cache, query the database
author_query = select(Author).where(Author.user == user_id)
@@ -295,8 +327,8 @@ async def get_cached_author_by_user_id(user_id: str, get_with_stat):
author = authors[0]
author_dict = author.dict()
await asyncio.gather(
redis_operation("SET", f"author:user:{user_id.strip()}", str(author.id)),
redis_operation("SET", f"author:id:{author.id}", json.dumps(author_dict)),
redis.execute("SET", f"author:user:{user_id.strip()}", str(author.id)),
redis.execute("SET", f"author:id:{author.id}", orjson.dumps(author_dict)),
)
return author_dict
@@ -317,9 +349,9 @@ async def get_cached_topic_authors(topic_id: int):
"""
# Attempt to get a list of author IDs from cache
rkey = f"topic:authors:{topic_id}"
cached_authors_ids = await redis_operation("GET", rkey)
cached_authors_ids = await redis.execute("GET", rkey)
if cached_authors_ids:
authors_ids = json.loads(cached_authors_ids)
authors_ids = orjson.loads(cached_authors_ids)
else:
# If cache is empty, get data from the database
with local_session() as session:
@@ -331,7 +363,7 @@ async def get_cached_topic_authors(topic_id: int):
)
authors_ids = [author_id for (author_id,) in session.execute(query).all()]
# Cache the retrieved author IDs
await redis_operation("SET", rkey, json.dumps(authors_ids))
await redis.execute("SET", rkey, orjson.dumps(authors_ids))
# Retrieve full author details from cached IDs
if authors_ids:
@@ -352,11 +384,11 @@ async def invalidate_shouts_cache(cache_keys: List[str]):
cache_key = f"shouts:{key}"
# Удаляем основной кэш
await redis_operation("DEL", cache_key)
await redis.execute("DEL", cache_key)
logger.debug(f"Invalidated cache key: {cache_key}")
# Добавляем ключ в список инвалидированных с TTL
await redis_operation("SETEX", f"{cache_key}:invalidated", value="1", ttl=CACHE_TTL)
await redis.execute("SETEX", f"{cache_key}:invalidated", CACHE_TTL, "1")
# Если это кэш темы, инвалидируем также связанные ключи
if key.startswith("topic_"):
@@ -368,7 +400,7 @@ async def invalidate_shouts_cache(cache_keys: List[str]):
f"topic:stats:{topic_id}",
]
for related_key in related_keys:
await redis_operation("DEL", related_key)
await redis.execute("DEL", related_key)
logger.debug(f"Invalidated related key: {related_key}")
except Exception as e:
@@ -379,15 +411,15 @@ async def cache_topic_shouts(topic_id: int, shouts: List[dict]):
"""Кэширует список публикаций для темы"""
key = f"topic_shouts_{topic_id}"
payload = json.dumps(shouts, cls=CustomJSONEncoder)
await redis_operation("SETEX", key, value=payload, ttl=CACHE_TTL)
await redis.execute("SETEX", key, CACHE_TTL, payload)
async def get_cached_topic_shouts(topic_id: int) -> List[dict]:
"""Получает кэшированный список публикаций для темы"""
key = f"topic_shouts_{topic_id}"
cached = await redis_operation("GET", key)
cached = await redis.execute("GET", key)
if cached:
return json.loads(cached)
return orjson.loads(cached)
return None
@@ -405,57 +437,33 @@ async def cache_related_entities(shout: Shout):
async def invalidate_shout_related_cache(shout: Shout, author_id: int):
"""
Инвалидирует весь кэш, связанный с публикацией
Инвалидирует весь кэш, связанный с публикацией и её связями
Args:
shout: Объект публикации
author_id: ID автора
"""
# Базовые ленты
cache_keys = [
cache_keys = {
"feed", # основная лента
f"author_{author_id}", # публикации автора
"random_top", # случайные топовые
"unrated", # неоцененные
"recent", # последние публикации
"coauthored", # совместные публикации
f"authored_{author_id}", # авторские публикации
f"followed_{author_id}", # подписки автора
]
"recent", # последние
"coauthored", # совместные
}
# Добавляем ключи для всех авторов публикации
for author in shout.authors:
cache_keys.extend(
[f"author_{author.id}", f"authored_{author.id}", f"followed_{author.id}", f"coauthored_{author.id}"]
)
# Добавляем ключи авторов
cache_keys.update(f"author_{a.id}" for a in shout.authors)
cache_keys.update(f"authored_{a.id}" for a in shout.authors)
# Добавляем ключи для тем
for topic in shout.topics:
cache_keys.extend(
[f"topic_{topic.id}", f"topic_shouts_{topic.id}", f"topic_recent_{topic.id}", f"topic_top_{topic.id}"]
)
# Добавляем ключи тем
cache_keys.update(f"topic_{t.id}" for t in shout.topics)
cache_keys.update(f"topic_shouts_{t.id}" for t in shout.topics)
# Инвалидируем все ключи
await invalidate_shouts_cache(cache_keys)
await invalidate_shouts_cache(list(cache_keys))
async def redis_operation(operation: str, key: str, value=None, ttl=None):
"""
Унифицированная функция для работы с Redis
Args:
operation: 'GET', 'SET', 'DEL', 'SETEX'
key: ключ
value: значение (для SET/SETEX)
ttl: время жизни в секундах (для SETEX)
"""
try:
if operation == "GET":
return await redis.execute("GET", key)
elif operation == "SET":
await redis.execute("SET", key, value)
elif operation == "SETEX":
await redis.execute("SETEX", key, ttl or CACHE_TTL, value)
elif operation == "DEL":
await redis.execute("DEL", key)
except Exception as e:
logger.error(f"Redis {operation} error for key {key}: {e}")
# Function removed - direct Redis calls used throughout the module instead
async def get_cached_entity(entity_type: str, entity_id: int, get_method, cache_method):
@@ -469,9 +477,9 @@ async def get_cached_entity(entity_type: str, entity_id: int, get_method, cache_
cache_method: метод кэширования
"""
key = f"{entity_type}:id:{entity_id}"
cached = await redis_operation("GET", key)
cached = await redis.execute("GET", key)
if cached:
return json.loads(cached)
return orjson.loads(cached)
entity = await get_method(entity_id)
if entity:
@@ -500,3 +508,120 @@ async def cache_by_id(entity, entity_id: int, cache_method):
d = x.dict()
await cache_method(d)
return d
# Universal function for saving data to the cache
async def cache_data(key: str, data: Any, ttl: Optional[int] = None) -> None:
"""
Saves data to the cache under the given key.
Args:
key: Cache key
data: Data to save
ttl: Cache lifetime in seconds (None - indefinite)
"""
try:
payload = json.dumps(data, cls=CustomJSONEncoder)
if ttl:
await redis.execute("SETEX", key, ttl, payload)
else:
await redis.execute("SET", key, payload)
logger.debug(f"Data saved to cache under key {key}")
except Exception as e:
logger.error(f"Error saving data to cache: {e}")
# Universal function for reading data from the cache
async def get_cached_data(key: str) -> Optional[Any]:
"""
Gets data from the cache by the given key.
Args:
key: Cache key
Returns:
Any: Data from the cache, or None if nothing is stored
"""
try:
cached_data = await redis.execute("GET", key)
if cached_data:
logger.debug(f"Data retrieved from cache under key {key}")
return orjson.loads(cached_data)
return None
except Exception as e:
logger.error(f"Error reading data from cache: {e}")
return None
# Universal function for invalidating cache keys by prefix
async def invalidate_cache_by_prefix(prefix: str) -> None:
"""
Invalidates all cache keys with the given prefix.
Args:
prefix: Prefix of the cache keys to invalidate
"""
try:
keys = await redis.execute("KEYS", f"{prefix}:*")
if keys:
await redis.execute("DEL", *keys)
logger.debug(f"Deleted {len(keys)} cache keys with prefix {prefix}")
except Exception as e:
logger.error(f"Error invalidating cache: {e}")
# Universal function for retrieving and caching data
async def cached_query(
cache_key: str,
query_func: callable,
ttl: Optional[int] = None,
force_refresh: bool = False,
use_key_format: bool = True,
**query_params,
) -> Any:
"""
Gets data from cache or executes query and saves result to cache.
Supports existing key formats for compatibility.
Args:
cache_key: Cache key or key template from CACHE_KEYS
query_func: Function to execute the query
ttl: Cache TTL in seconds (None - indefinite)
force_refresh: Force cache refresh
use_key_format: Whether to check if cache_key matches a key template in CACHE_KEYS
**query_params: Parameters to pass to the query function
Returns:
Any: Data from cache or query result
"""
# Check if cache_key matches a pattern in CACHE_KEYS
actual_key = cache_key
if use_key_format and "{}" in cache_key:
# Look for a template match in CACHE_KEYS
for key_name, key_format in CACHE_KEYS.items():
if cache_key == key_format:
# We have a match, now look for the id or value to format with
for param_name, param_value in query_params.items():
if param_name in ["id", "slug", "user", "topic_id", "author_id"]:
actual_key = cache_key.format(param_value)
break
# If not forcing refresh, try to get data from cache
if not force_refresh:
cached_result = await get_cached_data(actual_key)
if cached_result is not None:
return cached_result
# If data not in cache or refresh required, execute query
try:
result = await query_func(**query_params)
if result is not None:
# Save result to cache
await cache_data(actual_key, result, ttl)
return result
except Exception as e:
logger.error(f"Error executing query for caching: {e}")
# In case of error, return data from cache if not forcing refresh
if not force_refresh:
return await get_cached_data(actual_key)
raise

11
cache/memorycache.py vendored
View File

@@ -1,11 +0,0 @@
from dogpile.cache import make_region
from settings import REDIS_URL
# Создание региона кэша с TTL
cache_region = make_region()
cache_region.configure(
"dogpile.cache.redis",
arguments={"url": f"{REDIS_URL}/1"},
expiration_time=3600, # Cache expiration time in seconds
)

6
cache/precache.py vendored
View File

@@ -86,11 +86,15 @@ async def precache_data():
# Преобразуем словарь в список аргументов для HSET
if value:
# Если значение - словарь, преобразуем его в плоский список для HSET
if isinstance(value, dict):
flattened = []
for field, val in value.items():
flattened.extend([field, val])
await redis.execute("HSET", key, *flattened)
else:
# Предполагаем, что значение уже содержит список
await redis.execute("HSET", key, *value)
logger.info(f"redis hash '{key}' was restored")
with local_session() as session:

100
cache/revalidator.py vendored
View File

@@ -1,17 +1,26 @@
import asyncio
from cache.cache import cache_author, cache_topic, get_cached_author, get_cached_topic
from cache.cache import (
cache_author,
cache_topic,
get_cached_author,
get_cached_topic,
invalidate_cache_by_prefix,
)
from resolvers.stat import get_with_stat
from utils.logger import root_logger as logger
CACHE_REVALIDATION_INTERVAL = 300 # 5 minutes
class CacheRevalidationManager:
def __init__(self, interval=60):
def __init__(self, interval=CACHE_REVALIDATION_INTERVAL):
"""Инициализация менеджера с заданным интервалом проверки (в секундах)."""
self.interval = interval
self.items_to_revalidate = {"authors": set(), "topics": set(), "shouts": set(), "reactions": set()}
self.lock = asyncio.Lock()
self.running = True
self.MAX_BATCH_SIZE = 10 # Максимальное количество элементов для поштучной обработки
async def start(self):
"""Запуск фонового воркера для ревалидации кэша."""
@@ -32,23 +41,108 @@ class CacheRevalidationManager:
"""Обновление кэша для всех сущностей, требующих ревалидации."""
async with self.lock:
# Ревалидация кэша авторов
if self.items_to_revalidate["authors"]:
logger.debug(f"Revalidating {len(self.items_to_revalidate['authors'])} authors")
for author_id in self.items_to_revalidate["authors"]:
if author_id == "all":
await invalidate_cache_by_prefix("authors")
break
author = await get_cached_author(author_id, get_with_stat)
if author:
await cache_author(author)
self.items_to_revalidate["authors"].clear()
# Ревалидация кэша тем
if self.items_to_revalidate["topics"]:
logger.debug(f"Revalidating {len(self.items_to_revalidate['topics'])} topics")
for topic_id in self.items_to_revalidate["topics"]:
if topic_id == "all":
await invalidate_cache_by_prefix("topics")
break
topic = await get_cached_topic(topic_id)
if topic:
await cache_topic(topic)
self.items_to_revalidate["topics"].clear()
# Ревалидация шаутов (публикаций)
if self.items_to_revalidate["shouts"]:
shouts_count = len(self.items_to_revalidate["shouts"])
logger.debug(f"Revalidating {shouts_count} shouts")
# Проверяем наличие специального флага 'all'
if "all" in self.items_to_revalidate["shouts"]:
await invalidate_cache_by_prefix("shouts")
# Если элементов много, но не 'all', используем специфический подход
elif shouts_count > self.MAX_BATCH_SIZE:
# Инвалидируем только collections keys, которые затрагивают много сущностей
collection_keys = await asyncio.create_task(self._redis.execute("KEYS", "shouts:*"))
if collection_keys:
await self._redis.execute("DEL", *collection_keys)
logger.debug(f"Удалено {len(collection_keys)} коллекционных ключей шаутов")
# Обновляем кеш каждого конкретного шаута
for shout_id in self.items_to_revalidate["shouts"]:
if shout_id != "all":
# Точечная инвалидация для каждого shout_id
specific_keys = [f"shout:id:{shout_id}"]
for key in specific_keys:
await self._redis.execute("DEL", key)
logger.debug(f"Удален ключ кеша {key}")
else:
# Если элементов немного, обрабатываем каждый
for shout_id in self.items_to_revalidate["shouts"]:
if shout_id != "all":
# Точечная инвалидация для каждого shout_id
specific_keys = [f"shout:id:{shout_id}"]
for key in specific_keys:
await self._redis.execute("DEL", key)
logger.debug(f"Удален ключ кеша {key}")
self.items_to_revalidate["shouts"].clear()
# Аналогично для реакций - точечная инвалидация
if self.items_to_revalidate["reactions"]:
reactions_count = len(self.items_to_revalidate["reactions"])
logger.debug(f"Revalidating {reactions_count} reactions")
if "all" in self.items_to_revalidate["reactions"]:
await invalidate_cache_by_prefix("reactions")
elif reactions_count > self.MAX_BATCH_SIZE:
# Инвалидируем только collections keys для реакций
collection_keys = await asyncio.create_task(self._redis.execute("KEYS", "reactions:*"))
if collection_keys:
await self._redis.execute("DEL", *collection_keys)
logger.debug(f"Удалено {len(collection_keys)} коллекционных ключей реакций")
# Точечная инвалидация для каждой реакции
for reaction_id in self.items_to_revalidate["reactions"]:
if reaction_id != "all":
specific_keys = [f"reaction:id:{reaction_id}"]
for key in specific_keys:
await self._redis.execute("DEL", key)
logger.debug(f"Удален ключ кеша {key}")
else:
# Точечная инвалидация для каждой реакции
for reaction_id in self.items_to_revalidate["reactions"]:
if reaction_id != "all":
specific_keys = [f"reaction:id:{reaction_id}"]
for key in specific_keys:
await self._redis.execute("DEL", key)
logger.debug(f"Удален ключ кеша {key}")
self.items_to_revalidate["reactions"].clear()
def mark_for_revalidation(self, entity_id, entity_type):
"""Отметить сущность для ревалидации."""
if entity_id and entity_type:
self.items_to_revalidate[entity_type].add(entity_id)
def invalidate_all(self, entity_type):
"""Пометить для инвалидации все элементы указанного типа."""
logger.debug(f"Marking all {entity_type} for invalidation")
# Особый флаг для полной инвалидации
self.items_to_revalidate[entity_type].add("all")
async def stop(self):
"""Остановка фонового воркера."""
self.running = False
@@ -60,4 +154,4 @@ class CacheRevalidationManager:
pass
revalidation_manager = CacheRevalidationManager(interval=300) # Ревалидация каждые 5 минут
revalidation_manager = CacheRevalidationManager()

66
cache/triggers.py vendored
View File

@@ -2,9 +2,10 @@ from sqlalchemy import event
from cache.revalidator import revalidation_manager
from orm.author import Author, AuthorFollower
from orm.reaction import Reaction
from orm.reaction import Reaction, ReactionKind
from orm.shout import Shout, ShoutAuthor, ShoutReactionsFollower
from orm.topic import Topic, TopicFollower
from services.db import local_session
from utils.logger import root_logger as logger
@@ -43,6 +44,62 @@ def after_follower_handler(mapper, connection, target, is_delete=False):
revalidation_manager.mark_for_revalidation(target.follower, "authors")
def after_shout_handler(mapper, connection, target):
"""Обработчик изменения статуса публикации"""
if not isinstance(target, Shout):
return
# Проверяем изменение статуса публикации
# was_published = target.published_at is not None and target.deleted_at is None
# Всегда обновляем счетчики для авторов и тем при любом изменении поста
for author in target.authors:
revalidation_manager.mark_for_revalidation(author.id, "authors")
for topic in target.topics:
revalidation_manager.mark_for_revalidation(topic.id, "topics")
# Обновляем сам пост
revalidation_manager.mark_for_revalidation(target.id, "shouts")
def after_reaction_handler(mapper, connection, target):
"""Обработчик для комментариев"""
if not isinstance(target, Reaction):
return
# Проверяем что это комментарий
is_comment = target.kind == ReactionKind.COMMENT.value
# Получаем связанный пост
shout_id = target.shout if isinstance(target.shout, int) else target.shout.id
if not shout_id:
return
# Обновляем счетчики для автора комментария
if target.created_by:
revalidation_manager.mark_for_revalidation(target.created_by, "authors")
# Обновляем счетчики для поста
revalidation_manager.mark_for_revalidation(shout_id, "shouts")
if is_comment:
# Для комментариев обновляем также авторов и темы
with local_session() as session:
shout = (
session.query(Shout)
.filter(Shout.id == shout_id, Shout.published_at.is_not(None), Shout.deleted_at.is_(None))
.first()
)
if shout:
for author in shout.authors:
revalidation_manager.mark_for_revalidation(author.id, "authors")
for topic in shout.topics:
revalidation_manager.mark_for_revalidation(topic.id, "topics")
def events_register():
"""Регистрация обработчиков событий для всех сущностей."""
event.listen(ShoutAuthor, "after_insert", mark_for_revalidation)
@@ -64,6 +121,11 @@ def events_register():
event.listen(Reaction, "after_update", mark_for_revalidation)
event.listen(Author, "after_update", mark_for_revalidation)
event.listen(Topic, "after_update", mark_for_revalidation)
event.listen(Shout, "after_update", mark_for_revalidation)
event.listen(Shout, "after_update", after_shout_handler)
event.listen(Shout, "after_delete", after_shout_handler)
event.listen(Reaction, "after_insert", after_reaction_handler)
event.listen(Reaction, "after_update", after_reaction_handler)
event.listen(Reaction, "after_delete", after_reaction_handler)
logger.info("Event handlers registered successfully.")

279
docs/caching.md Normal file
View File

@@ -0,0 +1,279 @@
# Discours Caching System
## Overview
The Discours caching system is a comprehensive solution for improving platform performance. It uses Redis to store frequently requested data and reduce the load on the main database.
Caching is implemented as a multi-level system consisting of several modules:
- `cache.py` - the main module with the caching functions
- `revalidator.py` - asynchronous cache revalidation manager
- `triggers.py` - SQLAlchemy event triggers for automatic revalidation
- `precache.py` - pre-caching of data at application startup
## Key Components
### 1. Cache Key Formats
The system supports several key formats for compatibility and ease of use:
- **Entity keys**: `entity:property:value` (e.g., `author:id:123`)
- **Collection keys**: `entity:collection:params` (e.g., `authors:stats:limit=10:offset=0`)
- **Special keys**: kept for backwards compatibility (e.g., `topic_shouts_123`)
All standard key formats are stored in the `CACHE_KEYS` dictionary:
```python
CACHE_KEYS = {
    "TOPIC_ID": "topic:id:{}",
    "TOPIC_SLUG": "topic:slug:{}",
    "AUTHOR_ID": "author:id:{}",
    # and others...
}
```
### 2. Core Caching Functions
#### Key Structure
Instead of generating keys through helper functions, the system follows strict key-building conventions:
1. **Keys for individual entities** follow the pattern:
```
entity:property:value
```
For example:
- `topic:id:123` - the topic with ID 123
- `author:slug:john-doe` - the author with the slug "john-doe"
- `shout:id:456` - the publication with ID 456
2. **Keys for collections** follow the pattern:
```
entity:collection[:filter1=value1:filter2=value2:...]
```
For example:
- `topics:all:basic` - the basic list of all topics
- `authors:stats:limit=10:offset=0:sort=name` - a sorted list of authors with pagination
- `shouts:feed:limit=20:community=1` - a publication feed filtered by community
3. **Special key formats** for backward compatibility:
```
entity_action_id
```
For example:
- `topic_shouts_123` - publications for the topic with ID 123
In every module of the system developers must build keys explicitly according to these conventions, which keeps caching uniform and predictable.
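For illustration, a few keys built by hand according to these conventions (plain f-strings; no helper from `cache.py` is assumed):
```python
topic_id, limit, offset, sort = 123, 10, 0, "name"

entity_key = f"topic:id:{topic_id}"                                         # "topic:id:123"
collection_key = f"authors:stats:limit={limit}:offset={offset}:sort={sort}"  # collection key
legacy_key = f"topic_shouts_{topic_id}"                                      # "topic_shouts_123"
```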
#### Working with Cached Data
```python
async def cache_data(key, data, ttl=None)
async def get_cached_data(key)
```
These functions provide a universal interface for writing data to the cache and reading it back. They talk to Redis directly through `redis.execute()` calls.
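A minimal usage sketch based on the signatures above (the TTL value and the fetch step are placeholders):
```python
from cache.cache import cache_data, get_cached_data

async def get_topic_payload(topic_id: int):
    key = f"topic:id:{topic_id}"
    cached = await get_cached_data(key)
    if cached:
        return cached
    payload = {"id": topic_id}  # placeholder for the real database query
    await cache_data(key, payload, ttl=300)  # keep for 5 minutes
    return payload
```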
#### High-Level Query Caching
```python
async def cached_query(cache_key, query_func, ttl=None, force_refresh=False, **query_params)
```
The `cached_query` function combines reading from the cache with running the query when the data is not cached yet. It is the main function to use in resolvers for caching query results.
### 3. Entity Caching
Dedicated functions are provided for the main entity types:
```python
async def cache_topic(topic: dict)
async def cache_author(author: dict)
async def get_cached_topic(topic_id: int)
async def get_cached_author(author_id: int, get_with_stat)
```
These functions simplify working with the most frequently used data types and give them a uniform caching approach.
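A short sketch of how the topic helpers fit together (the dictionary contents are illustrative, and `get_cached_topic` is assumed to return nothing on a cache miss):
```python
from cache.cache import cache_topic, get_cached_topic

async def resolve_topic(topic_id: int):
    topic = await get_cached_topic(topic_id)
    if not topic:
        topic = {"id": topic_id, "slug": "example-topic"}  # placeholder for a DB lookup
        await cache_topic(topic)
    return topic
```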
### 4. Working with Relations
The following functions handle relations between entities:
```python
async def cache_follows(follower_id, entity_type, entity_id, is_insert=True)
async def get_cached_topic_followers(topic_id)
async def get_cached_author_followers(author_id)
async def get_cached_follower_topics(author_id)
```
They make it possible to efficiently cache and retrieve information about follows and about the relations between authors, topics and publications.
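For example, a follow handler might record the relation and then read back the follower list (a sketch; the `"topic"` entity-type string is an assumption):
```python
from cache.cache import cache_follows, get_cached_topic_followers

async def follow_topic(follower_id: int, topic_id: int):
    # is_insert=True records a new follow; False would remove it
    await cache_follows(follower_id, "topic", topic_id, is_insert=True)
    return await get_cached_topic_followers(topic_id)
```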
## Cache Invalidation System
### 1. Direct Invalidation
The system supports two types of cache invalidation:
#### 1.1. Invalidation by Prefix
```python
async def invalidate_cache_by_prefix(prefix)
```
Invalidates every cache key that starts with the given prefix. Resolvers use it to invalidate a whole group of caches when data changes in bulk.
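For example, in the same style as the snippets later in this document:
```python
# Drop every cached authors collection, e.g. after a bulk import
await invalidate_cache_by_prefix("authors")
```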
#### 1.2. Targeted Invalidation
```python
async def invalidate_authors_cache(author_id=None)
async def invalidate_topics_cache(topic_id=None)
```
These functions invalidate the cache for one specific entity only, which reduces the load on Redis and avoids throwing away cached data unnecessarily. If no entity ID is given, prefix invalidation is used instead.
Examples of targeted invalidation:
```python
# Invalidate the cache only for the author with ID 123
await invalidate_authors_cache(123)
# Invalidate the cache only for the topic with ID 456
await invalidate_topics_cache(456)
```
### 2. Deferred Invalidation
The `revalidator.py` module implements deferred cache invalidation through the `CacheRevalidationManager` class:
```python
class CacheRevalidationManager:
# ...
async def process_revalidation(self):
# ...
def mark_for_revalidation(self, entity_id, entity_type):
# ...
```
The revalidation manager runs as an asynchronous background process that periodically (every 5 minutes by default) checks whether any entities have been marked for revalidation.
Implementation details (see the sketch below):
- Authors and topics are revalidated one record at a time
- Shouts and reactions are processed in batches, with a threshold of 10 items
- Once the threshold is reached, the system switches to invalidating whole collections instead of individual records
- A special `all` flag triggers a full invalidation of every record of the given type
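A minimal sketch of how application code queues an entity for revalidation, mirroring the calls used in `triggers.py` (the handler name is illustrative):
```python
from cache.revalidator import revalidation_manager

def on_author_updated(author_id: int) -> None:
    # Queue the author for the next revalidation pass instead of
    # invalidating the cache synchronously in the request path.
    revalidation_manager.mark_for_revalidation(author_id, "authors")
```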
### 3. Automatic Invalidation via Triggers
The `triggers.py` module registers SQLAlchemy event handlers that automatically mark entities for revalidation whenever the data in the database changes:
```python
def events_register():
event.listen(Author, "after_update", mark_for_revalidation)
event.listen(Topic, "after_update", mark_for_revalidation)
# and others...
```
The triggers have the following characteristics:
- They react to insert, update and delete events
- They mark the affected entities for deferred revalidation
- They take entity relations into account (for example, when a topic changes, the related shouts are refreshed as well)
## Precaching
The `precache.py` module pre-populates the cache with frequently used data at application startup:
```python
async def precache_data():
# ...
```
This function runs when the application starts and fills the cache with data that users are likely to request often.
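In `main.py` the precache step is started concurrently with the other startup tasks; a simplified sketch of that wiring:
```python
import asyncio

from cache.precache import precache_data
from services.redis import redis

async def lifespan(_app):
    # Connect to Redis and warm the cache before serving the first requests
    await asyncio.gather(
        redis.connect(),
        precache_data(),
    )
```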
## Usage Examples
### Simple Caching of a Query Result
```python
async def get_topics_with_stats(limit=10, offset=0, by="title"):
# Build the cache key according to the convention
cache_key = f"topics:stats:limit={limit}:offset={offset}:sort={by}"
cached_data = await get_cached_data(cache_key)
if cached_data:
return cached_data
# Run the database query
result = ... # data-fetching logic
await cache_data(cache_key, result, ttl=300)
return result
```
### Using the Generic cached_query Function
```python
async def get_topics_with_stats(limit=10, offset=0, by="title"):
async def fetch_data(limit, offset, by):
# Data-fetching logic
return result
# Build the cache key according to the convention
cache_key = f"topics:stats:limit={limit}:offset={offset}:sort={by}"
return await cached_query(
cache_key,
fetch_data,
ttl=300,
limit=limit,
offset=offset,
by=by
)
```
### Targeted Cache Invalidation on Data Changes
```python
async def update_topic(topic_id, new_data):
# Update the data in the database
# ...
# Invalidate the cache only for the modified topic
await invalidate_topics_cache(topic_id)
return updated_topic
```
## Debugging and Monitoring
The caching system uses the logger to track its operations:
```python
logger.debug(f"Данные получены из кеша по ключу {key}")
logger.debug(f"Удалено {len(keys)} ключей кеша с префиксом {prefix}")
logger.error(f"Ошибка при инвалидации кеша: {e}")
```
This makes it possible to monitor the cache and to spot potential problems early.
## Usage Recommendations
1. **Follow the key-building conventions** - this is critical for cache consistency and predictability.
2. **Do not invent your own key formats** - use the existing patterns to keep things uniform.
3. **Do not forget invalidation** - always invalidate the cache when data changes.
4. **Prefer targeted invalidation** over prefix invalidation, to reduce the load on Redis.
5. **Set sensible TTLs** - use different TTL values depending on how often the data changes.
6. **Do not cache large volumes of data** - cache only what actually improves performance.
## Implementation Details
- **Data serialization**: `orjson` is used for efficient serialization and deserialization.
- **Date and time handling**: `CustomJSONEncoder` is used so that dates are serialized correctly.
- **Asynchrony**: all caching operations are asynchronous to minimize the impact on API performance.
- **Direct Redis access**: all operations go through direct `redis.execute()` calls with error handling.
- **Batch processing**: bulk operations use a threshold above which optimized strategies are applied.
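A sketch of the serialization approach described above; the fallback function plays the role that `CustomJSONEncoder` plays in the real code, and `Decimal` is only an example of a type `orjson` does not handle natively:
```python
import orjson
from decimal import Decimal

def _default(obj):
    # Fallback for types orjson cannot serialize natively
    if isinstance(obj, Decimal):
        return float(obj)
    raise TypeError(f"Unserializable type: {type(obj)!r}")

def serialize_for_cache(value) -> bytes:
    return orjson.dumps(value, default=_default)

def deserialize_from_cache(raw: bytes):
    return orjson.loads(raw)
```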
## Known Limitations
1. **Data consistency** - the system does not guarantee absolute consistency between the cache and the database.
2. **Memory** - the volume of data kept in the cache must be monitored to avoid Redis memory issues.
3. **Redis performance** - under a very high volume of cache operations, Redis itself can become a bottleneck.

165
docs/comments-pagination.md Normal file
View File

@@ -0,0 +1,165 @@
# Comment Pagination
## Overview
A branch-based comment pagination system has been implemented that loads and displays nested discussion threads efficiently. Its main advantages:
1. Only the comments that are actually needed are loaded, rather than the whole tree
2. Lower load on both the server and the client
3. Efficient navigation through large discussions
4. Preloading of the first N replies to improve the UX
## API for Hierarchical Comment Loading
### The `load_comments_branch` GraphQL Query
```graphql
query LoadCommentsBranch(
$shout: Int!,
$parentId: Int,
$limit: Int,
$offset: Int,
$sort: ReactionSort,
$childrenLimit: Int,
$childrenOffset: Int
) {
load_comments_branch(
shout: $shout,
parent_id: $parentId,
limit: $limit,
offset: $offset,
sort: $sort,
children_limit: $childrenLimit,
children_offset: $childrenOffset
) {
id
body
created_at
created_by {
id
name
slug
pic
}
kind
reply_to
stat {
rating
commented
}
first_replies {
id
body
created_at
created_by {
id
name
slug
pic
}
kind
reply_to
stat {
rating
commented
}
}
}
}
```
### Query Parameters
| Parameter | Type | Default | Description |
|-----------|------|---------|-------------|
| shout | Int! | - | ID of the article the comments belong to |
| parent_id | Int | null | ID of the parent comment. If null, root comments are loaded |
| limit | Int | 10 | Maximum number of comments to load |
| offset | Int | 0 | Pagination offset |
| sort | ReactionSort | newest | Sort order: newest, oldest, like |
| children_limit | Int | 3 | Maximum number of child comments per parent comment |
| children_offset | Int | 0 | Pagination offset for child comments |
### Response Fields
Each comment contains the following main fields:
- `id`: Comment ID
- `body`: Comment text
- `created_at`: Creation time
- `created_by`: Author information
- `kind`: Reaction kind (COMMENT)
- `reply_to`: ID of the parent comment (null for root comments)
- `first_replies`: The first N child comments
- `stat`: Comment statistics, including:
  - `commented`: Number of replies to the comment
  - `rating`: Comment rating
## Usage Examples
### Loading Root Comments with Their First Replies
```javascript
const { data } = await client.query({
query: LOAD_COMMENTS_BRANCH,
variables: {
shout: 222,
limit: 10,
offset: 0,
sort: "newest",
childrenLimit: 3
}
});
```
### Loading Replies to a Specific Comment
```javascript
const { data } = await client.query({
query: LOAD_COMMENTS_BRANCH,
variables: {
shout: 222,
parentId: 123, // ID of the comment whose replies we are loading
limit: 10,
offset: 0,
sort: "oldest" // Сортируем ответы от старых к новым
}
});
```
### Paginating Child Comments
To load additional replies to a comment:
```javascript
const { data } = await client.query({
query: LOAD_COMMENTS_BRANCH,
variables: {
shout: 222,
parentId: 123,
limit: 10,
offset: 0,
childrenLimit: 5,
childrenOffset: 3 // Skip the first 3 comments (already loaded)
}
});
```
## Client-Side Implementation Recommendations
1. For working efficiently with complex discussion threads it is recommended to:
   - Load only the root comments with their first N replies initially
   - When there are more replies (i.e. `stat.commented > first_replies.length`),
     show a "Show all replies" button
   - On click, load the additional replies with a query that specifies `parentId`
2. For sorting:
   - Use `newest` by default to surface fresh discussions
   - Provide a sort-order toggle for the whole comment tree
   - Reload the data with the new `sort` parameter whenever the sort order changes
3. For performance:
   - Cache query results on the client
   - Use optimistic updates when adding or editing comments
   - Load comments in chunks where needed (lazy loading)

View File

@@ -6,14 +6,20 @@
## Multi-Domain Authorization
- Authorization support for multiple domains:
  - *.dscrs.site (including testing.dscrs.site)
  - localhost[:port]
  - testingdiscoursio-git-*-discoursio.vercel.app
  - *.discours.io
- Authorization support for multiple domains
- Automatic detection of the authorization server
- Correct CORS handling for all supported domains
## Caching System
- Redis is used as the primary caching mechanism
- The cache_on_arguments decorator supports both synchronous and asynchronous functions
- Automatic JSON serialization/deserialization of data using CustomJSONEncoder
- Fallback serialization via pickle for complex objects
- Unique cache keys generated from the function signature and the arguments passed
- Configurable cache lifetime (TTL)
- Manual cache invalidation for specific functions and arguments (see the sketch below)
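A hypothetical sketch of applying such a decorator; the import path and keyword argument are assumptions, not the project's actual API:
```python
from services.cache import cache_on_arguments  # assumed import path

@cache_on_arguments(ttl=300)  # assumed keyword; the real decorator may differ
async def get_topic_by_slug(slug: str) -> dict:
    # The cache key is derived from the function and its arguments,
    # so identical calls within the TTL are served from Redis.
    ...
```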
## Webhooks
- Automatic webhook registration for the user.login event
@@ -25,11 +31,18 @@
## CORS Configuration
- Supported domains:
  - *.dscrs.site (including testing.dscrs.site, core.dscrs.site)
  - *.discours.io (including testing.discours.io)
  - localhost (including any port)
- Supported methods: GET, POST, OPTIONS
- Credentials support is enabled
- Allowed headers: Authorization, Content-Type, X-Requested-With, DNT, Cache-Control
- Preflight responses are cached for 20 days (1728000 seconds)
## Branch-Based Comment Pagination
- Efficient comment loading that respects the hierarchical structure
- A dedicated `load_comments_branch` query for optimized loading of a comment branch
- Ability to load an article's root comments together with the first replies to them
- Flexible pagination for both root and child comments
- The `stat.commented` field is used to display the number of replies to a comment
- A dedicated `first_replies` field stores the first replies to a comment
- Support for several sort orders (newest, oldest, popular)
- Optimized SQL queries to minimize database load

46
main.py
View File

@@ -14,20 +14,9 @@ from starlette.routing import Route
from cache.precache import precache_data
from cache.revalidator import revalidation_manager
from orm import (
# collection,
# invite,
author,
community,
notification,
reaction,
shout,
topic,
)
from services.db import create_table_if_not_exists, engine
from services.exception import ExceptionHandlerMiddleware
from services.redis import redis
from services.schema import resolvers
from services.schema import create_all_tables, resolvers
from services.search import search_service
from services.viewed import ViewedStorage
from services.webhook import WebhookEndpoint, create_webhook_endpoint
@@ -46,39 +35,10 @@ async def start():
print(f"[main] process started in {MODE} mode")
def create_all_tables():
for model in [
# user.User,
author.Author,
author.AuthorFollower,
community.Community,
community.CommunityFollower,
shout.Shout,
shout.ShoutAuthor,
author.AuthorBookmark,
topic.Topic,
topic.TopicFollower,
shout.ShoutTopic,
reaction.Reaction,
shout.ShoutReactionsFollower,
author.AuthorRating,
notification.Notification,
notification.NotificationSeen,
# collection.Collection, collection.ShoutCollection,
# invite.Invite
]:
create_table_if_not_exists(engine, model)
async def create_all_tables_async():
# Wrap the synchronous function in an asynchronous one
await asyncio.to_thread(create_all_tables)
async def lifespan(app):
async def lifespan(_app):
try:
create_all_tables()
await asyncio.gather(
create_all_tables_async(),
redis.connect(),
precache_data(),
ViewedStorage.init(),

View File

@@ -1,19 +1,10 @@
log_format custom '$remote_addr - $remote_user [$time_local] "$request" '
'origin=$http_origin allow_origin=$allow_origin status=$status '
'"$http_referer" "$http_user_agent"';
{{ $proxy_settings := "proxy_http_version 1.1; proxy_set_header Upgrade $http_upgrade; proxy_set_header Connection $http_connection; proxy_set_header Host $http_host; proxy_set_header X-Request-Start $msec;" }}
{{ $gzip_settings := "gzip on; gzip_min_length 1100; gzip_buffers 4 32k; gzip_types text/css text/javascript text/xml text/plain text/x-component application/javascript application/x-javascript application/json application/xml application/rss+xml font/truetype application/x-font-ttf font/opentype application/vnd.ms-fontobject image/svg+xml; gzip_vary on; gzip_comp_level 6;" }}
{{ $cors_headers_options := "if ($request_method = 'OPTIONS') { add_header 'Access-Control-Allow-Origin' '$allow_origin' always; add_header 'Access-Control-Allow-Methods' 'POST, GET, OPTIONS'; add_header 'Access-Control-Allow-Headers' 'Content-Type, Authorization' always; add_header 'Access-Control-Allow-Credentials' 'true' always; add_header 'Access-Control-Max-Age' 1728000; add_header 'Content-Type' 'text/plain; charset=utf-8'; add_header 'Content-Length' 0; return 204; }" }}
{{ $cors_headers_post := "if ($request_method = 'POST') { add_header 'Access-Control-Allow-Origin' '$allow_origin' always; add_header 'Access-Control-Allow-Methods' 'POST, GET, OPTIONS' always; add_header 'Access-Control-Allow-Headers' 'Content-Type, Authorization' always; add_header 'Access-Control-Allow-Credentials' 'true' always; }" }}
{{ $cors_headers_get := "if ($request_method = 'GET') { add_header 'Access-Control-Allow-Origin' '$allow_origin' always; add_header 'Access-Control-Allow-Methods' 'POST, GET, OPTIONS' always; add_header 'Access-Control-Allow-Headers' 'Content-Type, Authorization' always; add_header 'Access-Control-Allow-Credentials' 'true' always; }" }}
map $http_origin $allow_origin {
~^https?:\/\/(.*\.)?localhost(:\d+)?$ $http_origin;
~^https?:\/\/(.*\.)?discours\.io$ $http_origin;
~^https?:\/\/(.*\.)?dscrs\.site$ $http_origin;
~^https?:\/\/testing\.(discours\.io|dscrs\.site)$ $http_origin;
~^https?:\/\/discoursio-webapp(-.*)?\.vercel\.app$ $http_origin;
default "";
}
proxy_cache_path /var/cache/nginx levels=1:2 keys_zone=my_cache:10m max_size=1g
inactive=60m use_temp_path=off;
limit_conn_zone $binary_remote_addr zone=addr:10m;
@@ -30,7 +21,7 @@ server {
listen [::]:{{ $listen_port }};
listen {{ $listen_port }};
server_name {{ $.NOSSL_SERVER_NAME }};
access_log /var/log/nginx/{{ $.APP }}-access.log;
access_log /var/log/nginx/{{ $.APP }}-access.log custom;
error_log /var/log/nginx/{{ $.APP }}-error.log;
client_max_body_size 100M;
@@ -38,7 +29,7 @@ server {
listen [::]:{{ $listen_port }} ssl http2;
listen {{ $listen_port }} ssl http2;
server_name {{ $.NOSSL_SERVER_NAME }};
access_log /var/log/nginx/{{ $.APP }}-access.log;
access_log /var/log/nginx/{{ $.APP }}-access.log custom;
error_log /var/log/nginx/{{ $.APP }}-error.log;
ssl_certificate {{ $.APP_SSL_PATH }}/server.crt;
ssl_certificate_key {{ $.APP_SSL_PATH }}/server.key;
@@ -57,9 +48,34 @@ server {
proxy_pass http://{{ $.APP }}-{{ $upstream_port }};
{{ $proxy_settings }}
{{ $gzip_settings }}
{{ $cors_headers_options }}
{{ $cors_headers_post }}
{{ $cors_headers_get }}
# Handle CORS for OPTIONS method
if ($request_method = 'OPTIONS') {
add_header 'Access-Control-Allow-Origin' $allow_origin always;
add_header 'Access-Control-Allow-Methods' 'POST, GET, OPTIONS';
add_header 'Access-Control-Allow-Headers' 'Content-Type, Authorization' always;
add_header 'Access-Control-Allow-Credentials' 'true' always;
add_header 'Access-Control-Max-Age' 1728000;
add_header 'Content-Type' 'text/plain; charset=utf-8';
add_header 'Content-Length' 0;
return 204;
}
# Handle CORS for POST method
if ($request_method = 'POST') {
add_header 'Access-Control-Allow-Origin' $allow_origin always;
add_header 'Access-Control-Allow-Methods' 'POST, GET, OPTIONS' always;
add_header 'Access-Control-Allow-Headers' 'Content-Type, Authorization' always;
add_header 'Access-Control-Allow-Credentials' 'true' always;
}
# Handle CORS for GET method
if ($request_method = 'GET') {
add_header 'Access-Control-Allow-Origin' $allow_origin always;
add_header 'Access-Control-Allow-Methods' 'POST, GET, OPTIONS' always;
add_header 'Access-Control-Allow-Headers' 'Content-Type, Authorization' always;
add_header 'Access-Control-Allow-Credentials' 'true' always;
}
proxy_cache my_cache;
proxy_cache_revalidate on;
@@ -69,26 +85,17 @@ server {
proxy_cache_lock on;
# Connections and request limits increase (bad for DDos)
limit_conn addr 10000;
limit_req zone=req_zone burst=10 nodelay;
}
# Custom location block for /upload
# location /upload {
# proxy_pass http://uploader-8080/;
# {{ $proxy_settings }}
# {{ $gzip_settings }}
# {{ $cors_headers_options }}
# {{ $cors_headers_post }}
# {{ $cors_headers_get }}
# }
location ~* \.(jpg|jpeg|png|gif|ico|css|js)$ {
expires 30d; # This means that the client can cache these resources for 30 days.
proxy_pass http://{{ $.APP }}-{{ $upstream_port }};
expires 30d;
add_header Cache-Control "public, no-transform";
}
location ~* \.(mp3|wav|ogg|flac|aac|aif|webm)$ {
proxy_pass http://{{ $.APP }}-{{ $upstream_port }};
if ($request_method = 'GET') {
add_header 'Access-Control-Allow-Origin' $allow_origin always;
add_header 'Access-Control-Allow-Methods' 'GET, POST, OPTIONS' always;

View File

@@ -1,6 +1,6 @@
import time
from sqlalchemy import JSON, Boolean, Column, ForeignKey, Integer, String
from sqlalchemy import JSON, Boolean, Column, ForeignKey, Index, Integer, String
from services.db import Base
@@ -8,6 +8,15 @@ from services.db import Base
class AuthorRating(Base):
"""
Рейтинг автора от другого автора.
Attributes:
rater (int): ID оценивающего автора
author (int): ID оцениваемого автора
plus (bool): Положительная/отрицательная оценка
"""
__tablename__ = "author_rating"
id = None # type: ignore
@@ -15,8 +24,26 @@ class AuthorRating(Base):
author = Column(ForeignKey("author.id"), primary_key=True)
plus = Column(Boolean)
# Определяем индексы
__table_args__ = (
# Индекс для быстрого поиска всех оценок конкретного автора
Index("idx_author_rating_author", "author"),
# Индекс для быстрого поиска всех оценок, оставленных конкретным автором
Index("idx_author_rating_rater", "rater"),
)
class AuthorFollower(Base):
"""
Подписка одного автора на другого.
Attributes:
follower (int): ID подписчика
author (int): ID автора, на которого подписываются
created_at (int): Время создания подписки
auto (bool): Признак автоматической подписки
"""
__tablename__ = "author_follower"
id = None # type: ignore
@@ -25,16 +52,57 @@ class AuthorFollower(Base):
created_at = Column(Integer, nullable=False, default=lambda: int(time.time()))
auto = Column(Boolean, nullable=False, default=False)
# Определяем индексы
__table_args__ = (
# Индекс для быстрого поиска всех подписчиков автора
Index("idx_author_follower_author", "author"),
# Индекс для быстрого поиска всех авторов, на которых подписан конкретный автор
Index("idx_author_follower_follower", "follower"),
)
class AuthorBookmark(Base):
"""
Закладка автора на публикацию.
Attributes:
author (int): ID автора
shout (int): ID публикации
"""
__tablename__ = "author_bookmark"
id = None # type: ignore
author = Column(ForeignKey("author.id"), primary_key=True)
shout = Column(ForeignKey("shout.id"), primary_key=True)
# Определяем индексы
__table_args__ = (
# Индекс для быстрого поиска всех закладок автора
Index("idx_author_bookmark_author", "author"),
# Индекс для быстрого поиска всех авторов, добавивших публикацию в закладки
Index("idx_author_bookmark_shout", "shout"),
)
class Author(Base):
"""
Модель автора в системе.
Attributes:
user (str): Идентификатор пользователя в системе авторизации
name (str): Отображаемое имя
slug (str): Уникальный строковый идентификатор
bio (str): Краткая биография/статус
about (str): Полное описание
pic (str): URL изображения профиля
links (dict): Ссылки на социальные сети и сайты
created_at (int): Время создания профиля
last_seen (int): Время последнего посещения
updated_at (int): Время последнего обновления
deleted_at (int): Время удаления (если профиль удален)
"""
__tablename__ = "author"
user = Column(String) # unbounded link with authorizer's User type
@@ -53,3 +121,17 @@ class Author(Base):
# search_vector = Column(
# TSVectorType("name", "slug", "bio", "about", regconfig="pg_catalog.russian")
# )
# Определяем индексы
__table_args__ = (
# Индекс для быстрого поиска по slug
Index("idx_author_slug", "slug"),
# Индекс для быстрого поиска по идентификатору пользователя
Index("idx_author_user", "user"),
# Индекс для фильтрации неудаленных авторов
Index("idx_author_deleted_at", "deleted_at", postgresql_where=deleted_at.is_(None)),
# Индекс для сортировки по времени создания (для новых авторов)
Index("idx_author_created_at", "created_at"),
# Индекс для сортировки по времени последнего посещения
Index("idx_author_last_seen", "last_seen"),
)

55
orm/draft.py Normal file
View File

@@ -0,0 +1,55 @@
import time
from sqlalchemy import JSON, Boolean, Column, ForeignKey, Integer, String
from sqlalchemy.orm import relationship
from orm.author import Author
from orm.topic import Topic
from services.db import Base
class DraftTopic(Base):
__tablename__ = "draft_topic"
id = None # type: ignore
shout = Column(ForeignKey("draft.id"), primary_key=True, index=True)
topic = Column(ForeignKey("topic.id"), primary_key=True, index=True)
main = Column(Boolean, nullable=True)
class DraftAuthor(Base):
__tablename__ = "draft_author"
id = None # type: ignore
shout = Column(ForeignKey("draft.id"), primary_key=True, index=True)
author = Column(ForeignKey("author.id"), primary_key=True, index=True)
caption = Column(String, nullable=True, default="")
class Draft(Base):
__tablename__ = "draft"
# required
created_at: int = Column(Integer, nullable=False, default=lambda: int(time.time()))
created_by: int = Column(ForeignKey("author.id"), nullable=False)
# optional
layout: str = Column(String, nullable=True, default="article")
slug: str = Column(String, unique=True)
title: str = Column(String, nullable=True)
subtitle: str | None = Column(String, nullable=True)
lead: str | None = Column(String, nullable=True)
description: str | None = Column(String, nullable=True)
body: str = Column(String, nullable=False, comment="Body")
media: dict | None = Column(JSON, nullable=True)
cover: str | None = Column(String, nullable=True, comment="Cover image url")
cover_caption: str | None = Column(String, nullable=True, comment="Cover image alt caption")
lang: str = Column(String, nullable=False, default="ru", comment="Language")
seo: str | None = Column(String, nullable=True) # JSON
# auto
updated_at: int | None = Column(Integer, nullable=True, index=True)
deleted_at: int | None = Column(Integer, nullable=True, index=True)
updated_by: int | None = Column(ForeignKey("author.id"), nullable=True)
deleted_by: int | None = Column(ForeignKey("author.id"), nullable=True)
authors = relationship(Author, secondary="draft_author")
topics = relationship(Topic, secondary="draft_topic")

View File

@@ -1,6 +1,6 @@
import time
from sqlalchemy import JSON, Boolean, Column, ForeignKey, Integer, String
from sqlalchemy import JSON, Boolean, Column, ForeignKey, Index, Integer, String
from sqlalchemy.orm import relationship
from orm.author import Author
@@ -10,6 +10,15 @@ from services.db import Base
class ShoutTopic(Base):
"""
Связь между публикацией и темой.
Attributes:
shout (int): ID публикации
topic (int): ID темы
main (bool): Признак основной темы
"""
__tablename__ = "shout_topic"
id = None # type: ignore
@@ -17,6 +26,12 @@ class ShoutTopic(Base):
topic = Column(ForeignKey("topic.id"), primary_key=True, index=True)
main = Column(Boolean, nullable=True)
# Определяем дополнительные индексы
__table_args__ = (
# Оптимизированный составной индекс для запросов, которые ищут публикации по теме
Index("idx_shout_topic_topic_shout", "topic", "shout"),
)
class ShoutReactionsFollower(Base):
__tablename__ = "shout_reactions_followers"
@@ -30,6 +45,15 @@ class ShoutReactionsFollower(Base):
class ShoutAuthor(Base):
"""
Связь между публикацией и автором.
Attributes:
shout (int): ID публикации
author (int): ID автора
caption (str): Подпись автора
"""
__tablename__ = "shout_author"
id = None # type: ignore
@@ -37,8 +61,18 @@ class ShoutAuthor(Base):
author = Column(ForeignKey("author.id"), primary_key=True, index=True)
caption = Column(String, nullable=True, default="")
# Определяем дополнительные индексы
__table_args__ = (
# Оптимизированный индекс для запросов, которые ищут публикации по автору
Index("idx_shout_author_author_shout", "author", "shout"),
)
class Shout(Base):
"""
Публикация в системе.
"""
__tablename__ = "shout"
created_at: int = Column(Integer, nullable=False, default=lambda: int(time.time()))
@@ -72,3 +106,22 @@ class Shout(Base):
oid: str | None = Column(String, nullable=True)
seo: str | None = Column(String, nullable=True) # JSON
draft: int | None = Column(ForeignKey("draft.id"), nullable=True)
# Определяем индексы
__table_args__ = (
# Индекс для быстрого поиска неудаленных публикаций
Index("idx_shout_deleted_at", "deleted_at", postgresql_where=deleted_at.is_(None)),
# Индекс для быстрой фильтрации по community
Index("idx_shout_community", "community"),
# Индекс для быстрого поиска по slug
Index("idx_shout_slug", "slug"),
# Составной индекс для фильтрации опубликованных неудаленных публикаций
Index(
"idx_shout_published_deleted",
"published_at",
"deleted_at",
postgresql_where=published_at.is_not(None) & deleted_at.is_(None),
),
)

View File

@@ -1,11 +1,21 @@
import time
from sqlalchemy import JSON, Boolean, Column, ForeignKey, Integer, String
from sqlalchemy import JSON, Boolean, Column, ForeignKey, Index, Integer, String
from services.db import Base
class TopicFollower(Base):
"""
Связь между топиком и его подписчиком.
Attributes:
follower (int): ID подписчика
topic (int): ID топика
created_at (int): Время создания связи
auto (bool): Автоматическая подписка
"""
__tablename__ = "topic_followers"
id = None # type: ignore
@@ -14,8 +24,29 @@ class TopicFollower(Base):
created_at = Column(Integer, nullable=False, default=int(time.time()))
auto = Column(Boolean, nullable=False, default=False)
# Определяем индексы
__table_args__ = (
# Индекс для быстрого поиска всех подписчиков топика
Index("idx_topic_followers_topic", "topic"),
# Индекс для быстрого поиска всех топиков, на которые подписан автор
Index("idx_topic_followers_follower", "follower"),
)
class Topic(Base):
"""
Модель топика (темы) публикаций.
Attributes:
slug (str): Уникальный строковый идентификатор темы
title (str): Название темы
body (str): Описание темы
pic (str): URL изображения темы
community (int): ID сообщества
oid (str): Старый ID
parent_ids (list): IDs родительских тем
"""
__tablename__ = "topic"
slug = Column(String, unique=True)
@@ -24,5 +55,12 @@ class Topic(Base):
pic = Column(String, nullable=True, comment="Picture")
community = Column(ForeignKey("community.id"), default=1)
oid = Column(String, nullable=True, comment="Old ID")
parent_ids = Column(JSON, nullable=True, comment="Parent Topic IDs")
# Определяем индексы
__table_args__ = (
# Индекс для быстрого поиска по slug
Index("idx_topic_slug", "slug"),
# Индекс для быстрого поиска по сообществу
Index("idx_topic_community", "community"),
)

View File

@@ -1,51 +0,0 @@
[tool.poetry]
name = "core"
version = "0.4.7"
description = "core module for discours.io"
authors = ["discoursio devteam"]
license = "MIT"
readme = "README.md"
[tool.poetry.dependencies]
python = "^3.12"
SQLAlchemy = "^2.0.29"
psycopg2-binary = "^2.9.9"
redis = {extras = ["hiredis"], version = "^5.0.1"}
sentry-sdk = {version = "^1.44.1", extras = ["starlette", "ariadne", "sqlalchemy"]}
starlette = "^0.39.2"
gql = "^3.5.0"
ariadne = "^0.23.0"
pre-commit = "^3.7.0"
granian = "^1.4.1"
google-analytics-data = "^0.18.7"
opensearch-py = "^2.6.0"
httpx = "^0.27.0"
dogpile-cache = "^1.3.1"
colorlog = "^6.8.2"
fakeredis = "^2.25.1"
pydantic = "^2.9.2"
jwt = "^1.3.1"
authlib = "^1.3.2"
[tool.poetry.group.dev.dependencies]
ruff = "^0.4.7"
isort = "^5.13.2"
pydantic = "^2.9.2"
[build-system]
requires = ["poetry-core>=1.0.0"]
build-backend = "poetry.core.masonry.api"
[tool.pyright]
venvPath = "."
venv = ".venv"
[tool.isort]
multi_line_output = 3
include_trailing_comma = true
force_grid_wrap = 0
line_length = 120
[tool.ruff]
line-length = 120

6
requirements.dev.txt Normal file
View File

@@ -0,0 +1,6 @@
fakeredis
pytest
pytest-asyncio
pytest-cov
mypy
ruff

17
requirements.txt Normal file
View File

@@ -0,0 +1,17 @@
# own auth
bcrypt
authlib
passlib
opensearch-py
google-analytics-data
colorlog
psycopg2-binary
httpx
redis[hiredis]
sentry-sdk[starlette,sqlalchemy]
starlette
gql
ariadne
granian
orjson
pydantic

View File

@@ -11,7 +11,14 @@ from resolvers.author import ( # search_authors,
update_author,
)
from resolvers.community import get_communities_all, get_community
from resolvers.editor import create_shout, delete_shout, update_shout
from resolvers.draft import (
create_draft,
delete_draft,
load_drafts,
publish_draft,
unpublish_draft,
update_draft,
)
from resolvers.feed import (
load_shouts_coauthored,
load_shouts_discussed,
@@ -30,6 +37,7 @@ from resolvers.reaction import (
create_reaction,
delete_reaction,
load_comment_ratings,
load_comments_branch,
load_reactions_by,
load_shout_comments,
load_shout_ratings,
@@ -92,10 +100,6 @@ __all__ = [
"follow",
"unfollow",
"get_shout_followers",
# editor
"create_shout",
"update_shout",
"delete_shout",
# reaction
"create_reaction",
"update_reaction",
@@ -104,6 +108,7 @@ __all__ = [
"load_shout_comments",
"load_shout_ratings",
"load_comment_ratings",
"load_comments_branch",
# notifier
"load_notifications",
"notifications_seen_thread",
@@ -113,4 +118,13 @@ __all__ = [
"rate_author",
"get_my_rates_comments",
"get_my_rates_shouts",
# draft
"load_drafts",
"create_draft",
"update_draft",
"delete_draft",
"publish_draft",
"publish_shout",
"unpublish_shout",
"unpublish_draft",
]

View File

@@ -1,25 +1,196 @@
import asyncio
import time
from typing import Optional
from sqlalchemy import desc, select, text
from sqlalchemy import select, text
from cache.cache import (
cache_author,
cached_query,
get_cached_author,
get_cached_author_by_user_id,
get_cached_author_followers,
get_cached_follower_authors,
get_cached_follower_topics,
invalidate_cache_by_prefix,
)
from orm.author import Author
from orm.shout import ShoutAuthor, ShoutTopic
from orm.topic import Topic
from resolvers.stat import get_with_stat
from services.auth import login_required
from services.db import local_session
from services.redis import redis
from services.schema import mutation, query
from utils.logger import root_logger as logger
DEFAULT_COMMUNITIES = [1]
# Вспомогательная функция для получения всех авторов без статистики
async def get_all_authors():
"""
Получает всех авторов без статистики.
Используется для случаев, когда нужен полный список авторов без дополнительной информации.
Returns:
list: Список всех авторов без статистики
"""
cache_key = "authors:all:basic"
# Функция для получения всех авторов из БД
async def fetch_all_authors():
logger.debug("Получаем список всех авторов из БД и кешируем результат")
with local_session() as session:
# Запрос на получение базовой информации об авторах
authors_query = select(Author).where(Author.deleted_at.is_(None))
authors = session.execute(authors_query).scalars().all()
# Преобразуем авторов в словари
return [author.dict() for author in authors]
# Используем универсальную функцию для кеширования запросов
return await cached_query(cache_key, fetch_all_authors)
# Вспомогательная функция для получения авторов со статистикой с пагинацией
async def get_authors_with_stats(limit=50, offset=0, by: Optional[str] = None):
"""
Получает авторов со статистикой с пагинацией.
Args:
limit: Максимальное количество возвращаемых авторов
offset: Смещение для пагинации
by: Опциональный параметр сортировки (new/active)
Returns:
list: Список авторов с их статистикой
"""
# Формируем ключ кеша с помощью универсальной функции
cache_key = f"authors:stats:limit={limit}:offset={offset}"
# Функция для получения авторов из БД
async def fetch_authors_with_stats():
logger.debug(f"Выполняем запрос на получение авторов со статистикой: limit={limit}, offset={offset}, by={by}")
with local_session() as session:
# Базовый запрос для получения авторов
base_query = select(Author).where(Author.deleted_at.is_(None))
# Применяем сортировку
if by:
if isinstance(by, dict):
# Обработка словаря параметров сортировки
from sqlalchemy import asc, desc
for field, direction in by.items():
column = getattr(Author, field, None)
if column:
if direction.lower() == "desc":
base_query = base_query.order_by(desc(column))
else:
base_query = base_query.order_by(column)
elif by == "new":
base_query = base_query.order_by(desc(Author.created_at))
elif by == "active":
base_query = base_query.order_by(desc(Author.last_seen))
else:
# По умолчанию сортируем по времени создания
base_query = base_query.order_by(desc(Author.created_at))
else:
base_query = base_query.order_by(desc(Author.created_at))
# Применяем лимит и смещение
base_query = base_query.limit(limit).offset(offset)
# Получаем авторов
authors = session.execute(base_query).scalars().all()
author_ids = [author.id for author in authors]
if not author_ids:
return []
# Оптимизированный запрос для получения статистики по публикациям для авторов
shouts_stats_query = f"""
SELECT sa.author, COUNT(DISTINCT s.id) as shouts_count
FROM shout_author sa
JOIN shout s ON sa.shout = s.id AND s.deleted_at IS NULL AND s.published_at IS NOT NULL
WHERE sa.author IN ({",".join(map(str, author_ids))})
GROUP BY sa.author
"""
shouts_stats = {row[0]: row[1] for row in session.execute(text(shouts_stats_query))}
# Запрос на получение статистики по подписчикам для авторов
followers_stats_query = f"""
SELECT author, COUNT(DISTINCT follower) as followers_count
FROM author_follower
WHERE author IN ({",".join(map(str, author_ids))})
GROUP BY author
"""
followers_stats = {row[0]: row[1] for row in session.execute(text(followers_stats_query))}
# Формируем результат с добавлением статистики
result = []
for author in authors:
author_dict = author.dict()
author_dict["stat"] = {
"shouts": shouts_stats.get(author.id, 0),
"followers": followers_stats.get(author.id, 0),
}
result.append(author_dict)
# Кешируем каждого автора отдельно для использования в других функциях
await cache_author(author_dict)
return result
# Используем универсальную функцию для кеширования запросов
return await cached_query(cache_key, fetch_authors_with_stats)
# Функция для инвалидации кеша авторов
async def invalidate_authors_cache(author_id=None):
"""
Инвалидирует кеши авторов при изменении данных.
Args:
author_id: Опциональный ID автора для точечной инвалидации.
Если не указан, инвалидируются все кеши авторов.
"""
if author_id:
# Точечная инвалидация конкретного автора
logger.debug(f"Инвалидация кеша для автора #{author_id}")
specific_keys = [
f"author:id:{author_id}",
f"author:followers:{author_id}",
f"author:follows-authors:{author_id}",
f"author:follows-topics:{author_id}",
f"author:follows-shouts:{author_id}",
]
# Получаем user_id автора, если есть
with local_session() as session:
author = session.query(Author).filter(Author.id == author_id).first()
if author and author.user:
specific_keys.append(f"author:user:{author.user.strip()}")
# Удаляем конкретные ключи
for key in specific_keys:
try:
await redis.execute("DEL", key)
logger.debug(f"Удален ключ кеша {key}")
except Exception as e:
logger.error(f"Ошибка при удалении ключа {key}: {e}")
# Также ищем и удаляем ключи коллекций, содержащих данные об этом авторе
collection_keys = await redis.execute("KEYS", "authors:stats:*")
if collection_keys:
await redis.execute("DEL", *collection_keys)
logger.debug(f"Удалено {len(collection_keys)} коллекционных ключей авторов")
else:
# Общая инвалидация всех кешей авторов
logger.debug("Полная инвалидация кеша авторов")
await invalidate_cache_by_prefix("authors")
@mutation.field("update_author")
@login_required
@@ -51,10 +222,30 @@ async def update_author(_, info, profile):
@query.field("get_authors_all")
def get_authors_all(_, _info):
with local_session() as session:
authors = session.query(Author).all()
return authors
async def get_authors_all(_, _info):
"""
Получает список всех авторов без статистики.
Returns:
list: Список всех авторов
"""
return await get_all_authors()
@query.field("get_authors_paginated")
async def get_authors_paginated(_, _info, limit=50, offset=0, by=None):
"""
Получает список авторов с пагинацией и статистикой.
Args:
limit: Максимальное количество возвращаемых авторов
offset: Смещение для пагинации
by: Параметр сортировки (new/active)
Returns:
list: Список авторов с их статистикой
"""
return await get_authors_with_stats(limit, offset, by)
@query.field("get_author")
@@ -105,145 +296,105 @@ async def get_author_id(_, _info, user: str):
asyncio.create_task(cache_author(author_dict))
return author_with_stat
except Exception as exc:
import traceback
traceback.print_exc()
logger.error(exc)
logger.error(f"Error getting author: {exc}")
return None
@query.field("load_authors_by")
async def load_authors_by(_, _info, by, limit, offset):
logger.debug(f"loading authors by {by}")
authors_query = select(Author)
"""
Загружает авторов по заданному критерию с пагинацией.
if by.get("slug"):
authors_query = authors_query.filter(Author.slug.ilike(f"%{by['slug']}%"))
elif by.get("name"):
authors_query = authors_query.filter(Author.name.ilike(f"%{by['name']}%"))
elif by.get("topic"):
authors_query = (
authors_query.join(ShoutAuthor) # Первое соединение ShoutAuthor
.join(ShoutTopic, ShoutAuthor.shout == ShoutTopic.shout)
.join(Topic, ShoutTopic.topic == Topic.id)
.filter(Topic.slug == str(by["topic"]))
)
Args:
by: Критерий сортировки авторов (new/active)
limit: Максимальное количество возвращаемых авторов
offset: Смещение для пагинации
if by.get("last_seen"): # в unix time
before = int(time.time()) - by["last_seen"]
authors_query = authors_query.filter(Author.last_seen > before)
elif by.get("created_at"): # в unix time
before = int(time.time()) - by["created_at"]
authors_query = authors_query.filter(Author.created_at > before)
authors_query = authors_query.limit(limit).offset(offset)
with local_session() as session:
authors_nostat = session.execute(authors_query).all()
authors = []
for a in authors_nostat:
if isinstance(a, Author):
author_dict = await get_cached_author(a.id, get_with_stat)
if author_dict and isinstance(author_dict.get("shouts"), int):
authors.append(author_dict)
# order
order = by.get("order")
if order in ["shouts", "followers"]:
authors_query = authors_query.order_by(desc(text(f"{order}_stat")))
# group by
authors = get_with_stat(authors_query)
return authors or []
Returns:
list: Список авторов с учетом критерия
"""
# Используем оптимизированную функцию для получения авторов
return await get_authors_with_stats(limit, offset, by)
def get_author_id_from(slug="", user=None, author_id=None):
if not slug and not user and not author_id:
raise ValueError("One of slug, user, or author_id must be provided")
author_query = select(Author.id)
if user:
author_query = author_query.filter(Author.user == user)
elif slug:
author_query = author_query.filter(Author.slug == slug)
elif author_id:
author_query = author_query.filter(Author.id == author_id)
try:
author_id = None
if author_id:
return author_id
with local_session() as session:
author_id_result = session.execute(author_query).first()
author_id = author_id_result[0] if author_id_result else None
if not author_id:
raise ValueError("Author not found")
author = None
if slug:
author = session.query(Author).filter(Author.slug == slug).first()
if author:
author_id = author.id
return author_id
if user:
author = session.query(Author).filter(Author.user == user).first()
if author:
author_id = author.id
except Exception as exc:
logger.error(exc)
return author_id
@query.field("get_author_follows")
async def get_author_follows(_, _info, slug="", user=None, author_id=0):
try:
author_id = get_author_id_from(slug, user, author_id)
logger.debug(f"getting follows for @{slug}")
author_id = get_author_id_from(slug=slug, user=user, author_id=author_id)
if not author_id:
return {}
if bool(author_id):
logger.debug(f"getting {author_id} follows authors")
authors = await get_cached_follower_authors(author_id)
topics = await get_cached_follower_topics(author_id)
followed_authors = await get_cached_follower_authors(author_id)
followed_topics = await get_cached_follower_topics(author_id)
# TODO: Get followed communities too
return {
"topics": topics,
"authors": authors,
"communities": [{"id": 1, "name": "Дискурс", "slug": "discours", "pic": ""}],
"authors": followed_authors,
"topics": followed_topics,
"communities": DEFAULT_COMMUNITIES,
"shouts": [],
}
except Exception:
import traceback
traceback.print_exc()
return {"error": "Author not found"}
@query.field("get_author_follows_topics")
async def get_author_follows_topics(_, _info, slug="", user=None, author_id=None):
try:
follower_id = get_author_id_from(slug, user, author_id)
topics = await get_cached_follower_topics(follower_id)
return topics
except Exception:
import traceback
traceback.print_exc()
logger.debug(f"getting followed topics for @{slug}")
author_id = get_author_id_from(slug=slug, user=user, author_id=author_id)
if not author_id:
return []
followed_topics = await get_cached_follower_topics(author_id)
return followed_topics
@query.field("get_author_follows_authors")
async def get_author_follows_authors(_, _info, slug="", user=None, author_id=None):
try:
follower_id = get_author_id_from(slug, user, author_id)
return await get_cached_follower_authors(follower_id)
except Exception:
import traceback
traceback.print_exc()
logger.debug(f"getting followed authors for @{slug}")
author_id = get_author_id_from(slug=slug, user=user, author_id=author_id)
if not author_id:
return []
followed_authors = await get_cached_follower_authors(author_id)
return followed_authors
def create_author(user_id: str, slug: str, name: str = ""):
author = Author()
author.user = user_id # Связь с user_id из системы авторизации
author.slug = slug # Идентификатор из системы авторизации
author.created_at = author.updated_at = int(time.time())
author.name = name or slug # если не указано
with local_session() as session:
try:
author = None
if user_id:
author = session.query(Author).filter(Author.user == user_id).first()
elif slug:
author = session.query(Author).filter(Author.slug == slug).first()
if not author:
new_author = Author(user=user_id, slug=slug, name=name)
session.add(new_author)
session.add(author)
session.commit()
logger.info(f"author created by webhook {new_author.dict()}")
except Exception as exc:
logger.debug(exc)
return author
@query.field("get_author_followers")
async def get_author_followers(_, _info, slug: str = "", user: str = "", author_id: int = 0):
logger.debug(f"getting followers for @{slug}")
logger.debug(f"getting followers for author @{slug} or ID:{author_id}")
author_id = get_author_id_from(slug=slug, user=user, author_id=author_id)
followers = []
if author_id:
if not author_id:
return []
followers = await get_cached_author_followers(author_id)
return followers

352
resolvers/draft.py Normal file
View File

@@ -0,0 +1,352 @@
import time
from operator import or_
from sqlalchemy.sql import and_
from cache.cache import (
cache_author,
cache_by_id,
cache_topic,
invalidate_shout_related_cache,
invalidate_shouts_cache,
)
from orm.author import Author
from orm.draft import Draft
from orm.shout import Shout, ShoutAuthor, ShoutTopic
from orm.topic import Topic
from services.auth import login_required
from services.db import local_session
from services.notify import notify_shout
from services.schema import mutation, query
from services.search import search_service
from utils.logger import root_logger as logger
def create_shout_from_draft(session, draft, author_id):
# Создаем новую публикацию
shout = Shout(
body=draft.body,
slug=draft.slug,
cover=draft.cover,
cover_caption=draft.cover_caption,
lead=draft.lead,
description=draft.description,
title=draft.title,
subtitle=draft.subtitle,
layout=draft.layout,
media=draft.media,
lang=draft.lang,
seo=draft.seo,
created_by=author_id,
community=draft.community,
draft=draft.id,
deleted_at=None,
)
return shout
@query.field("load_drafts")
@login_required
async def load_drafts(_, info):
user_id = info.context.get("user_id")
author_dict = info.context.get("author", {})
author_id = author_dict.get("id")
if not user_id or not author_id:
return {"error": "User ID and author ID are required"}
with local_session() as session:
drafts = (
session.query(Draft)
.filter(or_(Draft.authors.any(Author.id == author_id), Draft.created_by == author_id))
.all()
)
return {"drafts": drafts}
@mutation.field("create_draft")
@login_required
async def create_draft(_, info, draft_input):
"""Create a new draft.
Args:
info: GraphQL context
draft_input (dict): Draft data including optional fields:
- title (str, required) - заголовок черновика
- body (str, required) - текст черновика
- slug (str)
- etc.
Returns:
dict: Contains either:
- draft: The created draft object
- error: Error message if creation failed
Example:
>>> async def test_create():
... context = {'user_id': '123', 'author': {'id': 1}}
... info = type('Info', (), {'context': context})()
... result = await create_draft(None, info, {'title': 'Test'})
... assert result.get('error') is None
... assert result['draft'].title == 'Test'
... return result
"""
user_id = info.context.get("user_id")
author_dict = info.context.get("author", {})
author_id = author_dict.get("id")
if not user_id or not author_id:
return {"error": "Author ID is required"}
# Проверяем обязательные поля
if "body" not in draft_input or not draft_input["body"]:
draft_input["body"] = "" # Пустая строка вместо NULL
if "title" not in draft_input or not draft_input["title"]:
draft_input["title"] = "" # Пустая строка вместо NULL
try:
with local_session() as session:
# Remove id from input if present since it's auto-generated
if "id" in draft_input:
del draft_input["id"]
# Добавляем текущее время создания
draft_input["created_at"] = int(time.time())
draft = Draft(created_by=author_id, **draft_input)
session.add(draft)
session.commit()
return {"draft": draft}
except Exception as e:
logger.error(f"Failed to create draft: {e}", exc_info=True)
return {"error": f"Failed to create draft: {str(e)}"}
@mutation.field("update_draft")
@login_required
async def update_draft(_, info, draft_id: int, draft_input):
"""Обновляет черновик публикации.
Args:
draft_id: ID черновика для обновления
draft_input: Данные для обновления черновика
Returns:
dict: Обновленный черновик или сообщение об ошибке
"""
user_id = info.context.get("user_id")
author_dict = info.context.get("author", {})
author_id = author_dict.get("id")
if not user_id or not author_id:
return {"error": "Author ID are required"}
with local_session() as session:
draft = session.query(Draft).filter(Draft.id == draft_id).first()
if not draft:
return {"error": "Draft not found"}
Draft.update(draft, draft_input)
draft.updated_at = int(time.time())
session.commit()
return {"draft": draft}
@mutation.field("delete_draft")
@login_required
async def delete_draft(_, info, draft_id: int):
author_dict = info.context.get("author", {})
author_id = author_dict.get("id")
with local_session() as session:
draft = session.query(Draft).filter(Draft.id == draft_id).first()
if not draft:
return {"error": "Draft not found"}
if author_id != draft.created_by and draft.authors.filter(Author.id == author_id).count() == 0:
return {"error": "You are not allowed to delete this draft"}
session.delete(draft)
session.commit()
return {"draft": draft}
@mutation.field("publish_draft")
@login_required
async def publish_draft(_, info, draft_id: int):
user_id = info.context.get("user_id")
author_dict = info.context.get("author", {})
author_id = author_dict.get("id")
if not user_id or not author_id:
return {"error": "User ID and author ID are required"}
with local_session() as session:
draft = session.query(Draft).filter(Draft.id == draft_id).first()
if not draft:
return {"error": "Draft not found"}
shout = create_shout_from_draft(session, draft, author_id)
session.add(shout)
session.commit()
return {"shout": shout, "draft": draft}
@mutation.field("unpublish_draft")
@login_required
async def unpublish_draft(_, info, draft_id: int):
user_id = info.context.get("user_id")
author_dict = info.context.get("author", {})
author_id = author_dict.get("id")
if not user_id or not author_id:
return {"error": "User ID and author ID are required"}
with local_session() as session:
draft = session.query(Draft).filter(Draft.id == draft_id).first()
if not draft:
return {"error": "Draft not found"}
shout = session.query(Shout).filter(Shout.draft == draft.id).first()
if shout:
shout.published_at = None
session.commit()
return {"shout": shout, "draft": draft}
return {"error": "Failed to unpublish draft"}
@mutation.field("publish_shout")
@login_required
async def publish_shout(_, info, shout_id: int):
"""Publish draft as a shout or update existing shout.
Args:
shout_id: ID существующей публикации или 0 для новой
draft: Объект черновика (опционально)
"""
user_id = info.context.get("user_id")
author_dict = info.context.get("author", {})
author_id = author_dict.get("id")
now = int(time.time())
if not user_id or not author_id:
return {"error": "User ID and author ID are required"}
try:
with local_session() as session:
shout = session.query(Shout).filter(Shout.id == shout_id).first()
if not shout:
return {"error": "Shout not found"}
was_published = shout.published_at is not None
draft = session.query(Draft).where(Draft.id == shout.draft).first()
if not draft:
return {"error": "Draft not found"}
# Находим черновик если не передан
if not shout:
shout = create_shout_from_draft(session, draft, author_id)
else:
# Обновляем существующую публикацию
shout.draft = draft.id
shout.created_by = author_id
shout.title = draft.title
shout.subtitle = draft.subtitle
shout.body = draft.body
shout.cover = draft.cover
shout.cover_caption = draft.cover_caption
shout.lead = draft.lead
shout.description = draft.description
shout.layout = draft.layout
shout.media = draft.media
shout.lang = draft.lang
shout.seo = draft.seo
draft.updated_at = now
shout.updated_at = now
# Устанавливаем published_at только если была ранее снята с публикации
if not was_published:
shout.published_at = now
# Обрабатываем связи с авторами
if (
not session.query(ShoutAuthor)
.filter(and_(ShoutAuthor.shout == shout.id, ShoutAuthor.author == author_id))
.first()
):
sa = ShoutAuthor(shout=shout.id, author=author_id)
session.add(sa)
# Обрабатываем темы
if draft.topics:
for topic in draft.topics:
st = ShoutTopic(
topic=topic.id, shout=shout.id, main=topic.main if hasattr(topic, "main") else False
)
session.add(st)
session.add(shout)
session.add(draft)
session.flush()
# Инвалидируем кэш только если это новая публикация или была снята с публикации
if not was_published:
cache_keys = ["feed", f"author_{author_id}", "random_top", "unrated"]
# Добавляем ключи для тем
for topic in shout.topics:
cache_keys.append(f"topic_{topic.id}")
cache_keys.append(f"topic_shouts_{topic.id}")
await cache_by_id(Topic, topic.id, cache_topic)
# Инвалидируем кэш
await invalidate_shouts_cache(cache_keys)
await invalidate_shout_related_cache(shout, author_id)
# Обновляем кэш авторов
for author in shout.authors:
await cache_by_id(Author, author.id, cache_author)
# Отправляем уведомление о публикации
await notify_shout(shout.dict(), "published")
# Обновляем поисковый индекс
search_service.index(shout)
else:
# Для уже опубликованных материалов просто отправляем уведомление об обновлении
await notify_shout(shout.dict(), "update")
session.commit()
return {"shout": shout}
except Exception as e:
logger.error(f"Failed to publish shout: {e}", exc_info=True)
if "session" in locals():
session.rollback()
return {"error": f"Failed to publish shout: {str(e)}"}
@mutation.field("unpublish_shout")
@login_required
async def unpublish_shout(_, info, shout_id: int):
"""Unpublish a shout.
Args:
shout_id: The ID of the shout to unpublish
Returns:
dict: The unpublished shout or an error message
"""
author_dict = info.context.get("author", {})
author_id = author_dict.get("id")
if not author_id:
return {"error": "Author ID is required"}
shout = None
with local_session() as session:
try:
shout = session.query(Shout).filter(Shout.id == shout_id).first()
shout.published_at = None
session.commit()
invalidate_shout_related_cache(shout)
invalidate_shouts_cache()
except Exception:
session.rollback()
return {"error": "Failed to unpublish shout"}
return {"shout": shout}

View File

@@ -1,10 +1,16 @@
import time
import orjson
from sqlalchemy import and_, desc, select
from sqlalchemy.orm import joinedload
from sqlalchemy.sql.functions import coalesce
from cache.cache import cache_author, cache_topic, invalidate_shout_related_cache, invalidate_shouts_cache
from cache.cache import (
cache_author,
cache_topic,
invalidate_shout_related_cache,
invalidate_shouts_cache,
)
from orm.author import Author
from orm.shout import Shout, ShoutAuthor, ShoutTopic
from orm.topic import Topic
@@ -13,12 +19,29 @@ from resolvers.stat import get_with_stat
from services.auth import login_required
from services.db import local_session
from services.notify import notify_shout
from services.schema import mutation, query
from services.schema import query
from services.search import search_service
from utils.logger import root_logger as logger
async def cache_by_id(entity, entity_id: int, cache_method):
"""Cache an entity by its ID using the provided cache method.
Args:
entity: The SQLAlchemy model class to query
entity_id (int): The ID of the entity to cache
cache_method: The caching function to use
Returns:
dict: The cached entity data if successful, None if entity not found
Example:
>>> async def test_cache():
... author = await cache_by_id(Author, 1, cache_author)
... assert author['id'] == 1
... assert 'name' in author
... return author
"""
caching_query = select(entity).filter(entity.id == entity_id)
result = get_with_stat(caching_query)
if not result or not result[0]:
@@ -33,7 +56,34 @@ async def cache_by_id(entity, entity_id: int, cache_method):
@query.field("get_my_shout")
@login_required
async def get_my_shout(_, info, shout_id: int):
# logger.debug(info)
"""Get a shout by ID if the requesting user has permission to view it.
DEPRECATED: use `load_drafts` instead
Args:
info: GraphQL resolver info containing context
shout_id (int): ID of the shout to retrieve
Returns:
dict: Contains either:
- error (str): Error message if retrieval failed
- shout (Shout): The requested shout if found and accessible
Permissions:
User must be:
- The shout creator
- Listed as an author
- Have editor role
Example:
>>> async def test_get_my_shout():
... context = {'user_id': '123', 'author': {'id': 1}, 'roles': []}
... info = type('Info', (), {'context': context})()
... result = await get_my_shout(None, info, 1)
... assert result['error'] is None
... assert result['shout'].id == 1
... return result
"""
user_id = info.context.get("user_id", "")
author_dict = info.context.get("author", {})
author_id = author_dict.get("id")
@@ -52,13 +102,26 @@ async def get_my_shout(_, info, shout_id: int):
if not shout:
return {"error": "no shout found", "shout": None}
# Convert the media JSON into a list of MediaItem objects
if hasattr(shout, "media") and shout.media:
if isinstance(shout.media, str):
try:
shout.media = orjson.loads(shout.media)
except Exception as e:
logger.error(f"Error parsing shout media: {e}")
shout.media = []
if not isinstance(shout.media, list):
shout.media = [shout.media] if shout.media else []
else:
shout.media = []
logger.debug(f"got {len(shout.authors)} shout authors, created by {shout.created_by}")
is_editor = "editor" in roles
logger.debug(f'viewer is{'' if is_editor else ' not'} editor')
logger.debug(f"viewer is{'' if is_editor else ' not'} editor")
is_creator = author_id == shout.created_by
logger.debug(f'viewer is{'' if is_creator else ' not'} creator')
logger.debug(f"viewer is{'' if is_creator else ' not'} creator")
is_author = bool(list(filter(lambda x: x.id == int(author_id), [x for x in shout.authors])))
logger.debug(f'viewer is{'' if is_creator else ' not'} author')
logger.debug(f"viewer is{'' if is_author else ' not'} author")
can_edit = is_editor or is_author or is_creator
if not can_edit:
@@ -91,8 +154,8 @@ async def get_shouts_drafts(_, info):
return {"shouts": shouts}
@mutation.field("create_shout")
@login_required
# @mutation.field("create_shout")
# @login_required
async def create_shout(_, info, inp):
logger.info(f"Starting create_shout with input: {inp}")
user_id = info.context.get("user_id")
@@ -199,89 +262,167 @@ async def create_shout(_, info, inp):
return {"error": error_msg}
def patch_main_topic(session, main_topic, shout):
def patch_main_topic(session, main_topic_slug, shout):
"""Update the main topic for a shout."""
logger.info(f"Starting patch_main_topic for shout#{shout.id} with slug '{main_topic_slug}'")
logger.debug(f"Current shout topics: {[(t.topic.slug, t.main) for t in shout.topics]}")
with session.begin():
shout = session.query(Shout).options(joinedload(Shout.topics)).filter(Shout.id == shout.id).first()
if not shout:
return
old_main_topic = (
# Get the current main topic
old_main = (
session.query(ShoutTopic).filter(and_(ShoutTopic.shout == shout.id, ShoutTopic.main.is_(True))).first()
)
if old_main:
logger.info(f"Found current main topic: {old_main.topic.slug}")
else:
logger.info("No current main topic found")
main_topic = session.query(Topic).filter(Topic.slug == main_topic).first()
# Find the new main topic
main_topic = session.query(Topic).filter(Topic.slug == main_topic_slug).first()
if not main_topic:
logger.error(f"Main topic with slug '{main_topic_slug}' not found")
return
if main_topic:
new_main_topic = (
logger.info(f"Found new main topic: {main_topic.slug} (id={main_topic.id})")
# Find the relation to the new main topic
new_main = (
session.query(ShoutTopic)
.filter(and_(ShoutTopic.shout == shout.id, ShoutTopic.topic == main_topic.id))
.first()
)
logger.debug(f"Found new main topic relation: {new_main is not None}")
if old_main_topic and new_main_topic and old_main_topic is not new_main_topic:
ShoutTopic.update(old_main_topic, {"main": False})
session.add(old_main_topic)
if old_main and new_main and old_main is not new_main:
logger.info(f"Updating main topic flags: {old_main.topic.slug} -> {new_main.topic.slug}")
old_main.main = False
session.add(old_main)
ShoutTopic.update(new_main_topic, {"main": True})
session.add(new_main_topic)
new_main.main = True
session.add(new_main)
session.flush()
logger.info(f"Main topic updated for shout#{shout.id}")
else:
logger.warning(f"No changes needed for main topic (old={old_main is not None}, new={new_main is not None})")
def patch_topics(session, shout, topics_input):
"""Update the topics associated with a shout.
Args:
session: SQLAlchemy session
shout (Shout): The shout to update
topics_input (list): List of topic dicts with fields:
- id (int): Topic ID (<0 for new topics)
- slug (str): Topic slug
- title (str): Topic title (for new topics)
Side Effects:
- Creates new topics if needed
- Updates shout-topic associations
- Refreshes shout object with new topics
Example:
>>> def test_patch_topics():
... topics = [
... {'id': -1, 'slug': 'new-topic', 'title': 'New Topic'},
... {'id': 1, 'slug': 'existing-topic'}
... ]
... with local_session() as session:
... shout = session.query(Shout).first()
... patch_topics(session, shout, topics)
... assert len(shout.topics) == 2
... assert any(t.slug == 'new-topic' for t in shout.topics)
... return shout.topics
"""
logger.info(f"Starting patch_topics for shout#{shout.id}")
logger.info(f"Received topics_input: {topics_input}")
# Create new topics if any are provided
new_topics_to_link = [Topic(**new_topic) for new_topic in topics_input if new_topic["id"] < 0]
if new_topics_to_link:
logger.info(f"Creating new topics: {[t.dict() for t in new_topics_to_link]}")
session.add_all(new_topics_to_link)
session.commit()
session.flush()
for new_topic_to_link in new_topics_to_link:
created_unlinked_topic = ShoutTopic(shout=shout.id, topic=new_topic_to_link.id)
session.add(created_unlinked_topic)
# Get the current topic links
current_links = session.query(ShoutTopic).filter(ShoutTopic.shout == shout.id).all()
logger.info(f"Current topic links: {[{t.topic: t.main} for t in current_links]}")
existing_topics_input = [topic_input for topic_input in topics_input if topic_input.get("id", 0) > 0]
existing_topic_to_link_ids = [
existing_topic_input["id"]
for existing_topic_input in existing_topics_input
if existing_topic_input["id"] not in [topic.id for topic in shout.topics]
]
# Remove the old links
if current_links:
logger.info(f"Removing old topic links for shout#{shout.id}")
for link in current_links:
session.delete(link)
session.flush()
for existing_topic_to_link_id in existing_topic_to_link_ids:
created_unlinked_topic = ShoutTopic(shout=shout.id, topic=existing_topic_to_link_id)
session.add(created_unlinked_topic)
# Create the new links
for topic_input in topics_input:
topic_id = topic_input["id"]
if topic_id < 0:
topic = next(t for t in new_topics_to_link if t.slug == topic_input["slug"])
topic_id = topic.id
topic_to_unlink_ids = [
topic.id
for topic in shout.topics
if topic.id not in [topic_input["id"] for topic_input in existing_topics_input]
]
logger.info(f"Creating new topic link: shout#{shout.id} -> topic#{topic_id}")
new_link = ShoutTopic(shout=shout.id, topic=topic_id, main=False)
session.add(new_link)
session.query(ShoutTopic).filter(
and_(ShoutTopic.shout == shout.id, ShoutTopic.topic.in_(topic_to_unlink_ids))
).delete(synchronize_session=False)
session.flush()
# Refresh the relations on the shout object
session.refresh(shout)
logger.info(f"Successfully updated topics for shout#{shout.id}")
logger.info(f"Final shout topics: {[t.dict() for t in shout.topics]}")
@mutation.field("update_shout")
@login_required
# @mutation.field("update_shout")
# @login_required
async def update_shout(_, info, shout_id: int, shout_input=None, publish=False):
logger.info(f"Starting update_shout with id={shout_id}, publish={publish}")
logger.debug(f"Full shout_input: {shout_input}")
user_id = info.context.get("user_id")
roles = info.context.get("roles", [])
author_dict = info.context.get("author")
if not author_dict:
logger.error("Author profile not found")
return {"error": "author profile was not found"}
author_id = author_dict.get("id")
shout_input = shout_input or {}
current_time = int(time.time())
shout_id = shout_id or shout_input.get("id", shout_id)
slug = shout_input.get("slug")
if not user_id:
logger.error("Unauthorized update attempt")
return {"error": "unauthorized"}
try:
with local_session() as session:
if author_id:
logger.info(f"author for shout#{shout_id} detected author #{author_id}")
shout_by_id = session.query(Shout).filter(Shout.id == shout_id).first()
logger.info(f"Processing update for shout#{shout_id} by author #{author_id}")
shout_by_id = (
session.query(Shout)
.options(joinedload(Shout.topics).joinedload(ShoutTopic.topic), joinedload(Shout.authors))
.filter(Shout.id == shout_id)
.first()
)
if not shout_by_id:
logger.error(f"shout#{shout_id} not found")
return {"error": "shout not found"}
logger.info(f"shout#{shout_id} found")
logger.info(f"Found shout#{shout_id}")
# Log the current topics
current_topics = (
[{"id": t.id, "slug": t.slug, "title": t.title} for t in shout_by_id.topics]
if shout_by_id.topics
else []
)
logger.info(f"Current topics for shout#{shout_id}: {current_topics}")
if slug != shout_by_id.slug:
same_slug_shout = session.query(Shout).filter(Shout.slug == slug).first()
@@ -294,24 +435,38 @@ async def update_shout(_, info, shout_id: int, shout_input=None, publish=False):
logger.info(f"shout#{shout_id} slug patched")
if any(x.id == author_id for x in shout_by_id.authors) or "editor" in roles:
logger.info(f"shout#{shout_id} is author or editor")
logger.info(f"Author #{author_id} has permission to edit shout#{shout_id}")
# topics patch
topics_input = shout_input.get("topics")
if topics_input:
logger.info(f"topics_input: {topics_input}")
logger.info(f"Received topics_input for shout#{shout_id}: {topics_input}")
try:
patch_topics(session, shout_by_id, topics_input)
logger.info(f"Successfully patched topics for shout#{shout_id}")
# Refresh session relations after patch_topics
session.refresh(shout_by_id)
except Exception as e:
logger.error(f"Error patching topics: {e}", exc_info=True)
return {"error": f"Failed to update topics: {str(e)}"}
del shout_input["topics"]
for tpc in topics_input:
await cache_by_id(Topic, tpc["id"], cache_topic)
else:
logger.warning(f"No topics_input received for shout#{shout_id}")
# main topic
main_topic = shout_input.get("main_topic")
if main_topic:
logger.info(f"Updating main topic for shout#{shout_id} to {main_topic}")
patch_main_topic(session, main_topic, shout_by_id)
shout_input["updated_at"] = current_time
if publish:
logger.info(f"publishing shout#{shout_id} with input: {shout_input}")
logger.info(f"Publishing shout#{shout_id}")
shout_input["published_at"] = current_time
# Check that the author link exists
logger.info(f"Checking author link for shout#{shout_id} and author#{author_id}")
@@ -330,11 +485,27 @@ async def update_shout(_, info, shout_id: int, shout_input=None, publish=False):
else:
logger.info("Author link already exists")
# Log the final state before saving
logger.info(f"Final shout_input for update: {shout_input}")
Shout.update(shout_by_id, shout_input)
session.add(shout_by_id)
session.commit()
shout_dict = shout_by_id.dict()
try:
session.commit()
# Refresh the object after commit to load all relations
session.refresh(shout_by_id)
logger.info(f"Successfully committed updates for shout#{shout_id}")
except Exception as e:
logger.error(f"Commit failed: {e}", exc_info=True)
return {"error": f"Failed to save changes: {str(e)}"}
# Verify the topics after the update
updated_topics = (
[{"id": t.id, "slug": t.slug, "title": t.title} for t in shout_by_id.topics]
if shout_by_id.topics
else []
)
logger.info(f"Updated topics for shout#{shout_id}: {updated_topics}")
# Invalidate the cache after the update
try:
@@ -366,31 +537,66 @@ async def update_shout(_, info, shout_id: int, shout_input=None, publish=False):
logger.warning(f"Cache invalidation error: {cache_error}", exc_info=True)
if not publish:
await notify_shout(shout_dict, "update")
await notify_shout(shout_by_id.dict(), "update")
else:
await notify_shout(shout_dict, "published")
await notify_shout(shout_by_id.dict(), "published")
# search service indexing
search_service.index(shout_by_id)
for a in shout_by_id.authors:
await cache_by_id(Author, a.id, cache_author)
logger.info(f"shout#{shout_id} updated")
# Load the full shout data with its relations
shout_with_relations = (
session.query(Shout)
.options(joinedload(Shout.topics).joinedload(ShoutTopic.topic), joinedload(Shout.authors))
.filter(Shout.id == shout_id)
.first()
)
# Build a dict with the base fields
shout_dict = shout_with_relations.dict()
# Explicitly add the related data
shout_dict["topics"] = (
[
{"id": topic.id, "slug": topic.slug, "title": topic.title}
for topic in shout_with_relations.topics
]
if shout_with_relations.topics
else []
)
# Add main_topic to the shout dictionary
shout_dict["main_topic"] = get_main_topic(shout_with_relations.topics)
shout_dict["authors"] = (
[
{"id": author.id, "name": author.name, "slug": author.slug}
for author in shout_with_relations.authors
]
if shout_with_relations.authors
else []
)
logger.info(f"Final shout data with relations: {shout_dict}")
logger.debug(
f"Loaded topics details: {[(t.topic.slug if t.topic else 'no-topic', t.main) for t in shout_with_relations.topics]}"
)
return {"shout": shout_dict, "error": None}
else:
logger.warning(f"updater for shout#{shout_id} is not author or editor")
logger.warning(f"Access denied: author #{author_id} cannot edit shout#{shout_id}")
return {"error": "access denied", "shout": None}
except Exception as exc:
import traceback
traceback.print_exc()
logger.error(exc)
logger.error(f" cannot update with data: {shout_input}")
logger.error(f"Unexpected error in update_shout: {exc}", exc_info=True)
logger.error(f"Failed input data: {shout_input}")
return {"error": "cant update shout"}
return {"error": "cant update shout"}
@mutation.field("delete_shout")
@login_required
# @mutation.field("delete_shout")
# @login_required
async def delete_shout(_, info, shout_id: int):
user_id = info.context.get("user_id")
roles = info.context.get("roles", [])
@@ -425,3 +631,45 @@ async def delete_shout(_, info, shout_id: int):
return {"error": None}
else:
return {"error": "access denied"}
def get_main_topic(topics):
"""Get the main topic from a list of ShoutTopic objects."""
logger.info(f"Starting get_main_topic with {len(topics) if topics else 0} topics")
logger.debug(
f"Topics data: {[(t.topic.slug if t.topic else 'no-topic', t.main) for t in topics] if topics else []}"
)
if not topics:
logger.warning("No topics provided to get_main_topic")
return {"id": 0, "title": "no topic", "slug": "notopic", "is_main": True}
# Find first main topic in original order
main_topic_rel = next((st for st in topics if st.main), None)
logger.debug(
f"Found main topic relation: {main_topic_rel.topic.slug if main_topic_rel and main_topic_rel.topic else None}"
)
if main_topic_rel and main_topic_rel.topic:
result = {
"slug": main_topic_rel.topic.slug,
"title": main_topic_rel.topic.title,
"id": main_topic_rel.topic.id,
"is_main": True,
}
logger.info(f"Returning main topic: {result}")
return result
# If no main found but topics exist, return first
if topics and topics[0].topic:
logger.info(f"No main topic found, using first topic: {topics[0].topic.slug}")
result = {
"slug": topics[0].topic.slug,
"title": topics[0].topic.title,
"id": topics[0].topic.id,
"is_main": True,
}
return result
logger.warning("No valid topics found, returning default")
return {"slug": "notopic", "title": "no topic", "id": 0, "is_main": True}
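A small usage sketch for get_main_topic, assuming the shout's topics relation is loaded with joinedload; the shout id is illustrative.

with local_session() as session:
    shout = (
        session.query(Shout)
        .options(joinedload(Shout.topics).joinedload(ShoutTopic.topic))
        .filter(Shout.id == 1)  # example id
        .first()
    )
    if shout:
        main = get_main_topic(shout.topics)
        # main is a dict like {"id": ..., "slug": ..., "title": ..., "is_main": True}
        logger.debug(f"main topic for shout#{shout.id}: {main['slug']}")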

View File

@@ -5,7 +5,12 @@ from sqlalchemy import and_, select
from orm.author import Author, AuthorFollower
from orm.shout import Shout, ShoutAuthor, ShoutReactionsFollower, ShoutTopic
from orm.topic import Topic, TopicFollower
from resolvers.reader import apply_options, get_shouts_with_links, has_field, query_with_stat
from resolvers.reader import (
apply_options,
get_shouts_with_links,
has_field,
query_with_stat,
)
from services.auth import login_required
from services.db import local_session
from services.schema import query
@@ -173,3 +178,21 @@ async def load_shouts_with_topic(_, info, slug: str, options) -> List[Shout]:
except Exception as error:
logger.debug(error)
return []
def apply_filters(q, filters):
"""
Apply the given filters to the query.
"""
logger.info(f"Applying filters: {filters}")
if filters.get("published"):
q = q.filter(Shout.published_at.is_not(None))
logger.info("Added published filter")
if filters.get("topic"):
topic_slug = filters["topic"]
q = q.join(ShoutTopic).join(Topic).filter(Topic.slug == topic_slug)
logger.info(f"Added topic filter: {topic_slug}")
return q
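A rough usage sketch for apply_filters, composing it with a base select; the topic slug is an example value and the join inference relies on the existing foreign keys.

q = select(Shout)
q = apply_filters(q, {"published": True, "topic": "culture"})  # example filters
with local_session() as session:
    shouts = session.execute(q).scalars().all()
    logger.debug(f"loaded {len(shouts)} published shouts for the topic")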

View File

@@ -1,7 +1,7 @@
import json
import time
from typing import List, Tuple
import orjson
from sqlalchemy import and_, select
from sqlalchemy.exc import SQLAlchemyError
from sqlalchemy.orm import aliased
@@ -115,7 +115,7 @@ def get_notifications_grouped(author_id: int, after: int = 0, limit: int = 10, o
if (groups_amount + offset) >= limit:
break
payload = json.loads(str(notification.payload))
payload = orjson.loads(str(notification.payload))
if str(notification.entity) == NotificationEntity.SHOUT.value:
shout = payload
@@ -177,7 +177,7 @@ def get_notifications_grouped(author_id: int, after: int = 0, limit: int = 10, o
elif str(notification.entity) == "follower":
thread_id = "followers"
follower = json.loads(payload)
follower = orjson.loads(payload)
group = groups_by_thread.get(thread_id)
if group:
if str(notification.action) == "follow":
@@ -293,11 +293,11 @@ async def notifications_seen_thread(_, info, thread: str, after: int):
)
exclude = set()
for nr in removed_reaction_notifications:
reaction = json.loads(str(nr.payload))
reaction = orjson.loads(str(nr.payload))
reaction_id = reaction.get("id")
exclude.add(reaction_id)
for n in new_reaction_notifications:
reaction = json.loads(str(n.payload))
reaction = orjson.loads(str(n.payload))
reaction_id = reaction.get("id")
if (
reaction_id not in exclude

View File

@@ -0,0 +1,25 @@
{
"include": [
"."
],
"exclude": [
"**/node_modules",
"**/__pycache__",
"**/.*"
],
"defineConstant": {
"DEBUG": true
},
"venvPath": ".",
"venv": ".venv",
"pythonVersion": "3.11",
"typeCheckingMode": "strict",
"reportMissingImports": true,
"reportMissingTypeStubs": false,
"reportUnknownMemberType": false,
"reportUnknownParameterType": false,
"reportUnknownVariableType": false,
"reportUnknownArgumentType": false,
"reportPrivateUsage": false,
"reportUntypedFunctionDecorator": false
}

View File

@@ -15,11 +15,21 @@ from utils.logger import root_logger as logger
async def get_my_rates_comments(_, info, comments: list[int]) -> list[dict]:
"""
Get the current user's reactions to the given comments.
Args:
    info: Request context
    comments: List of comment IDs
Returns:
    list[dict]: A list of dicts with the user's reactions to the comments.
        Each dict contains:
        - comment_id: comment ID
        - my_rate: reaction kind (LIKE/DISLIKE)
"""
author_dict = info.context.get("author") if info.context else None
author_id = author_dict.get("id") if author_dict else None
if not author_id:
return {"error": "Author not found"}
return []  # Return an empty list instead of an error dict
# Subquery for the current user's reactions
rated_query = (

View File

@@ -97,20 +97,23 @@ def get_reactions_with_stat(q, limit, offset):
def is_featured_author(session, author_id) -> bool:
"""
Check if an author has at least one featured article.
Check if an author has at least one non-deleted featured article.
:param session: Database session.
:param author_id: Author ID.
:return: True if the author has a featured article, else False.
"""
return session.query(
session.query(Shout).where(Shout.authors.any(id=author_id)).filter(Shout.featured_at.is_not(None)).exists()
session.query(Shout)
.where(Shout.authors.any(id=author_id))
.filter(Shout.featured_at.is_not(None), Shout.deleted_at.is_(None))
.exists()
).scalar()
def check_to_feature(session, approver_id, reaction) -> bool:
"""
Make a shout featured if it receives more than 4 votes.
Make a shout featured if it receives more than 4 votes from authors.
:param session: Database session.
:param approver_id: Approver author ID.
@@ -118,46 +121,78 @@ def check_to_feature(session, approver_id, reaction) -> bool:
:return: True if shout should be featured, else False.
"""
if not reaction.reply_to and is_positive(reaction.kind):
approvers = {approver_id}
# Count the number of approvers
# Check whether the shout already has 20% or more dislikes;
# if so, it must not be featured regardless of the number of likes
if check_to_unfeature(session, reaction):
return False
# Collect all authors who gave a like
author_approvers = set()
reacted_readers = (
session.query(Reaction.created_by)
.filter(Reaction.shout == reaction.shout, is_positive(Reaction.kind), Reaction.deleted_at.is_(None))
.filter(
Reaction.shout == reaction.shout,
is_positive(Reaction.kind),
# Ratings (LIKE, DISLIKE) are deleted physically, so no deleted_at filter is needed
)
.distinct()
.all()
)
for reader_id in reacted_readers:
# Add the current approver
approver = session.query(Author).filter(Author.id == approver_id).first()
if approver and is_featured_author(session, approver_id):
author_approvers.add(approver_id)
# Check whether the reacting readers have featured publications of their own
for (reader_id,) in reacted_readers:
if is_featured_author(session, reader_id):
approvers.add(reader_id)
return len(approvers) > 4
author_approvers.add(reader_id)
# A shout becomes featured once it gets more than 4 likes from featured authors
logger.debug(f"Shout {reaction.shout} has {len(author_approvers)} likes from authors")
return len(author_approvers) > 4
return False
def check_to_unfeature(session, rejecter_id, reaction) -> bool:
def check_to_unfeature(session, reaction) -> bool:
"""
Unfeature a shout if 20% of reactions are negative.
:param session: Database session.
:param rejecter_id: Rejecter author ID.
:param reaction: Reaction object.
:return: True if shout should be unfeatured, else False.
"""
if not reaction.reply_to and is_negative(reaction.kind):
if not reaction.reply_to:
# Check the dislike ratio even if the current reaction is not a dislike
total_reactions = (
session.query(Reaction)
.filter(
Reaction.shout == reaction.shout, Reaction.kind.in_(RATING_REACTIONS), Reaction.deleted_at.is_(None)
Reaction.shout == reaction.shout,
Reaction.reply_to.is_(None),
Reaction.kind.in_(RATING_REACTIONS),
# Ratings are deleted physically, so no deleted_at filter is needed
)
.count()
)
negative_reactions = (
session.query(Reaction)
.filter(Reaction.shout == reaction.shout, is_negative(Reaction.kind), Reaction.deleted_at.is_(None))
.filter(
Reaction.shout == reaction.shout,
is_negative(Reaction.kind),
Reaction.reply_to.is_(None),
# Ratings are deleted physically, so no deleted_at filter is needed
)
.count()
)
return total_reactions > 0 and (negative_reactions / total_reactions) >= 0.2
# Check whether negative reactions make up 20% or more of all reactions
negative_ratio = negative_reactions / total_reactions if total_reactions > 0 else 0
logger.debug(
    f"Shout {reaction.shout}: {negative_reactions}/{total_reactions} negative reactions ({negative_ratio:.2%})"
)
return total_reactions > 0 and negative_ratio >= 0.2
return False
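A quick numeric check of the 20% threshold above, with illustrative counts only:

total_reactions = 10       # top-level rating reactions on the shout
negative_reactions = 2     # dislikes among them
negative_ratio = negative_reactions / total_reactions
assert negative_ratio == 0.2                            # exactly at the threshold
assert total_reactions > 0 and negative_ratio >= 0.2    # -> the shout gets unfeatured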
@@ -196,8 +231,8 @@ async def _create_reaction(session, shout_id: int, is_author: bool, author_id: i
Create a new reaction and perform related actions such as updating counters and notification.
:param session: Database session.
:param info: GraphQL context info.
:param shout: Shout object.
:param shout_id: Shout ID.
:param is_author: Flag indicating if the user is the author of the shout.
:param author_id: Author ID.
:param reaction: Dictionary with reaction data.
:return: Dictionary with created reaction data.
@@ -217,10 +252,14 @@ async def _create_reaction(session, shout_id: int, is_author: bool, author_id: i
# Handle rating
if r.kind in RATING_REACTIONS:
if check_to_unfeature(session, author_id, r):
# Check the unfeature condition first (dislikes take priority)
if check_to_unfeature(session, r):
set_unfeatured(session, shout_id)
logger.info(f"Shout {shout_id} lost its featured status due to a high share of dislikes")
# Only if unfeature did not fire, check the feature condition
elif check_to_feature(session, author_id, r):
await set_featured(session, shout_id)
logger.info(f"Shout {shout_id} became featured thanks to likes from authors")
# Notify creation
await notify_reaction(rdict, "create")
@@ -281,8 +320,10 @@ async def create_reaction(_, info, reaction):
logger.debug(f"Creating reaction with data: {reaction_input}")
logger.debug(f"Author ID: {author_id}, Shout ID: {shout_id}")
if not shout_id or not author_id:
return {"error": "Shout ID and author ID are required to create a reaction."}
if not author_id:
return {"error": "Author ID is required to create a reaction."}
if not shout_id:
return {"error": "Shout ID is required to create a reaction."}
try:
with local_session() as session:
@@ -352,7 +393,7 @@ async def update_reaction(_, info, reaction):
result = session.execute(reaction_query).unique().first()
if result:
r, author, shout, commented_stat, rating_stat = result
r, author, _shout, commented_stat, rating_stat = result
if not r or not author:
return {"error": "Invalid reaction ID or unauthorized"}
@@ -404,15 +445,24 @@ async def delete_reaction(_, info, reaction_id: int):
if r.created_by != author_id and "editor" not in roles:
return {"error": "Access denied"}
if r.kind == ReactionKind.COMMENT.value:
r.deleted_at = int(time.time())
update_author_stat(author.id)
session.add(r)
session.commit()
elif r.kind == ReactionKind.PROPOSE.value:
r.deleted_at = int(time.time())
session.add(r)
session.commit()
# TODO: add more reaction types here
else:
logger.debug(f"{user_id} user removing his #{reaction_id} reaction")
reaction_dict = r.dict()
session.delete(r)
session.commit()
if check_to_unfeature(session, r):
set_unfeatured(session, r.shout)
# Update author stat
if r.kind == ReactionKind.COMMENT.value:
update_author_stat(author.id)
reaction_dict = r.dict()
await notify_reaction(reaction_dict, "delete")
return {"error": None, "reaction": reaction_dict}
@@ -483,7 +533,9 @@ async def load_reactions_by(_, _info, by, limit=50, offset=0):
# Add statistics and apply filters
q = add_reaction_stat_columns(q)
q = apply_reaction_filters(by, q)
q = q.where(Reaction.deleted_at.is_(None))
# Include reactions with deleted_at for building comment trees
# q = q.where(Reaction.deleted_at.is_(None))
# Group and sort
q = q.group_by(Reaction.id, Author.id, Shout.id)
@@ -560,24 +612,22 @@ async def load_shout_comments(_, info, shout: int, limit=50, offset=0):
@query.field("load_comment_ratings")
async def load_comment_ratings(_, info, comment: int, limit=50, offset=0):
"""
Load ratings for a specified comment with pagination and statistics.
Load ratings for a specified comment with pagination.
:param info: GraphQL context info.
:param comment: Comment ID.
:param limit: Number of ratings to load.
:param offset: Pagination offset.
:return: List of reactions.
:return: List of ratings.
"""
q = query_reactions()
q = add_reaction_stat_columns(q)
# Filter, group, sort, limit, offset
q = q.filter(
and_(
Reaction.deleted_at.is_(None),
Reaction.reply_to == comment,
Reaction.kind == ReactionKind.COMMENT.value,
Reaction.kind.in_(RATING_REACTIONS),
)
)
q = q.group_by(Reaction.id, Author.id, Shout.id)
@@ -585,3 +635,186 @@ async def load_comment_ratings(_, info, comment: int, limit=50, offset=0):
# Retrieve and return reactions
return get_reactions_with_stat(q, limit, offset)
@query.field("load_comments_branch")
async def load_comments_branch(
_,
_info,
shout: int,
parent_id: int | None = None,
limit=10,
offset=0,
sort="newest",
children_limit=3,
children_offset=0,
):
"""
Load hierarchical comments with pagination for both root and child comments.
:param info: GraphQL context info.
:param shout: Article ID.
:param parent_id: Parent comment ID (None for root comments).
:param limit: Number of comments to load.
:param offset: Pagination offset.
:param sort: Sort order ('newest', 'oldest', 'like').
:param children_limit: Maximum number of child comments.
:param children_offset: Offset for the child comments.
:return: List of comments with their children.
"""
# Build the base query
q = query_reactions()
q = add_reaction_stat_columns(q)
# Filter by article and kind (comments)
q = q.filter(
and_(
Reaction.deleted_at.is_(None),
Reaction.shout == shout,
Reaction.kind == ReactionKind.COMMENT.value,
)
)
# Filter by parent ID
if parent_id is None:
# Load only root comments
q = q.filter(Reaction.reply_to.is_(None))
else:
# Load only direct replies to the given comment
q = q.filter(Reaction.reply_to == parent_id)
# Sorting and grouping
q = q.group_by(Reaction.id, Author.id, Shout.id)
# Determine the sort order
order_by_stmt = None
if sort.lower() == "oldest":
order_by_stmt = asc(Reaction.created_at)
elif sort.lower() == "like":
order_by_stmt = desc("rating_stat")
else:  # "newest" is the default
order_by_stmt = desc(Reaction.created_at)
q = q.order_by(order_by_stmt)
# Run the query to fetch the comments
comments = get_reactions_with_stat(q, limit, offset)
# If comments were found, load their children and reply counts
if comments:
# Load the reply count for each comment
await load_replies_count(comments)
# Load the child comments
await load_first_replies(comments, children_limit, children_offset, sort)
return comments
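For context, a sketch of the client query this resolver backs; the field selection is illustrative, while the argument names follow the load_comments_branch definition added to the schema later in this changeset.

LOAD_COMMENTS_BRANCH = """
query LoadBranch($shout: Int!, $limit: Int, $children_limit: Int) {
  load_comments_branch(shout: $shout, limit: $limit, children_limit: $children_limit) {
    id
    body
    stat { commented rating }
    first_replies { id body created_at }
  }
}
"""
variables = {"shout": 42, "limit": 10, "children_limit": 3}  # example values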
async def load_replies_count(comments):
"""
Load the number of replies for a list of comments and update the stat.commented field.
:param comments: Comments for which reply counts should be loaded.
"""
if not comments:
return
comment_ids = [comment["id"] for comment in comments]
# Query counting the number of replies
q = (
select(Reaction.reply_to.label("parent_id"), func.count().label("count"))
.where(
and_(
Reaction.reply_to.in_(comment_ids),
Reaction.deleted_at.is_(None),
Reaction.kind == ReactionKind.COMMENT.value,
)
)
.group_by(Reaction.reply_to)
)
# Run the query
with local_session() as session:
result = session.execute(q).fetchall()
# Build a {parent_id: count} dict
replies_count = {row[0]: row[1] for row in result}
# Attach the values to the comments
for comment in comments:
if "stat" not in comment:
comment["stat"] = {}
# Update the comment counter in stat
comment["stat"]["commented"] = replies_count.get(comment["id"], 0)
async def load_first_replies(comments, limit, offset, sort="newest"):
"""
Load the first N replies for each comment.
:param comments: Comments for which replies should be loaded.
:param limit: Maximum number of replies per comment.
:param offset: Offset for paginating the child comments.
:param sort: Sort order for the replies.
"""
if not comments or limit <= 0:
return
# Collect the comment IDs
comment_ids = [comment["id"] for comment in comments]
# Base query for loading the replies
q = query_reactions()
q = add_reaction_stat_columns(q)
# Filter: only replies to the given comments
q = q.filter(
and_(
Reaction.reply_to.in_(comment_ids),
Reaction.deleted_at.is_(None),
Reaction.kind == ReactionKind.COMMENT.value,
)
)
# Grouping
q = q.group_by(Reaction.id, Author.id, Shout.id)
# Determine the sort order
order_by_stmt = None
if sort.lower() == "oldest":
order_by_stmt = asc(Reaction.created_at)
elif sort.lower() == "like":
order_by_stmt = desc("rating_stat")
else:  # "newest" is the default
order_by_stmt = desc(Reaction.created_at)
q = q.order_by(order_by_stmt, Reaction.reply_to)
# Run the query
replies = get_reactions_with_stat(q)
# Group replies by parent ID
replies_by_parent = {}
for reply in replies:
parent_id = reply.get("reply_to")
if parent_id not in replies_by_parent:
replies_by_parent[parent_id] = []
replies_by_parent[parent_id].append(reply)
# Attach replies to their comments, applying offset and limit
for comment in comments:
comment_id = comment["id"]
if comment_id in replies_by_parent:
parent_replies = replies_by_parent[comment_id]
# Apply offset and limit
comment["first_replies"] = parent_replies[offset : offset + limit]
else:
comment["first_replies"] = []
# Load reply counts for the child comments
all_replies = [reply for replies in replies_by_parent.values() for reply in replies]
if all_replies:
await load_replies_count(all_replies)
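Note that the per-parent pagination above happens in Python after a single query; a tiny sketch of the slicing behaviour with illustrative data:

parent_replies = [{"id": i} for i in range(1, 8)]  # 7 replies to one comment
offset, limit = 2, 3
assert parent_replies[offset : offset + limit] == [{"id": 3}, {"id": 4}, {"id": 5}]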

View File

@@ -1,5 +1,4 @@
import json
import orjson
from graphql import GraphQLResolveInfo
from sqlalchemy import and_, nulls_last, text
from sqlalchemy.orm import aliased
@@ -9,7 +8,6 @@ from orm.author import Author
from orm.reaction import Reaction, ReactionKind
from orm.shout import Shout, ShoutAuthor, ShoutTopic
from orm.topic import Topic
from services.auth import login_accepted
from services.db import json_array_builder, json_builder, local_session
from services.schema import query
from services.search import search_text
@@ -63,10 +61,11 @@ def query_with_stat(info):
Adds the statistics subquery
"""
q = (
select(Shout)
.where(and_(Shout.published_at.is_not(None), Shout.deleted_at.is_(None)))
.join(Author, Author.id == Shout.created_by)
q = select(Shout).filter(
and_(
Shout.published_at.is_not(None),  # must be published
Shout.deleted_at.is_(None),  # must not be deleted
)
)
# Main author
@@ -188,12 +187,15 @@ def get_shouts_with_links(info, q, limit=20, offset=0):
"""
shouts = []
try:
# logger.info(f"Starting get_shouts_with_links with limit={limit}, offset={offset}")
q = q.limit(limit).offset(offset)
with local_session() as session:
shouts_result = session.execute(q).all()
# logger.info(f"Got {len(shouts_result) if shouts_result else 0} shouts from query")
if not shouts_result:
logger.warning("No shouts found in query result")
return []
for idx, row in enumerate(shouts_result):
@@ -201,6 +203,7 @@ def get_shouts_with_links(info, q, limit=20, offset=0):
shout = None
if hasattr(row, "Shout"):
shout = row.Shout
# logger.debug(f"Processing shout#{shout.id} at index {idx}")
if shout:
shout_id = int(f"{shout.id}")
shout_dict = shout.dict()
@@ -218,42 +221,68 @@ def get_shouts_with_links(info, q, limit=20, offset=0):
if has_field(info, "stat"):
stat = {}
if isinstance(row.stat, str):
stat = json.loads(row.stat)
stat = orjson.loads(row.stat)
elif isinstance(row.stat, dict):
stat = row.stat
viewed = ViewedStorage.get_shout(shout_id=shout_id) or 0
shout_dict["stat"] = {**stat, "viewed": viewed, "commented": stat.get("comments_count", 0)}
if has_field(info, "main_topic") and hasattr(row, "main_topic"):
shout_dict["main_topic"] = (
json.loads(row.main_topic) if isinstance(row.main_topic, str) else row.main_topic
# Handle main_topic and topics
topics = None
if has_field(info, "topics") and hasattr(row, "topics"):
topics = orjson.loads(row.topics) if isinstance(row.topics, str) else row.topics
# logger.debug(f"Shout#{shout_id} topics: {topics}")
shout_dict["topics"] = topics
if has_field(info, "main_topic"):
main_topic = None
if hasattr(row, "main_topic"):
# logger.debug(f"Raw main_topic for shout#{shout_id}: {row.main_topic}")
main_topic = (
orjson.loads(row.main_topic) if isinstance(row.main_topic, str) else row.main_topic
)
# logger.debug(f"Parsed main_topic for shout#{shout_id}: {main_topic}")
if not main_topic and topics and len(topics) > 0:
# logger.info(f"No main_topic found for shout#{shout_id}, using first topic from list")
main_topic = {
"id": topics[0]["id"],
"title": topics[0]["title"],
"slug": topics[0]["slug"],
"is_main": True,
}
elif not main_topic:
logger.warning(f"No main_topic and no topics found for shout#{shout_id}")
main_topic = {"id": 0, "title": "no topic", "slug": "notopic", "is_main": True}
shout_dict["main_topic"] = main_topic
# logger.debug(f"Final main_topic for shout#{shout_id}: {main_topic}")
if has_field(info, "authors") and hasattr(row, "authors"):
shout_dict["authors"] = (
json.loads(row.authors) if isinstance(row.authors, str) else row.authors
orjson.loads(row.authors) if isinstance(row.authors, str) else row.authors
)
if has_field(info, "topics") and hasattr(row, "topics"):
shout_dict["topics"] = json.loads(row.topics) if isinstance(row.topics, str) else row.topics
if has_field(info, "media") and shout.media:
# Handle the media field
media_data = shout.media
if isinstance(media_data, str):
try:
media_data = json.loads(media_data)
except json.JSONDecodeError:
media_data = orjson.loads(media_data)
except orjson.JSONDecodeError:
media_data = []
shout_dict["media"] = [media_data] if isinstance(media_data, dict) else media_data
shouts.append(shout_dict)
except Exception as row_error:
logger.error(f"Ошибка при обработке строки {idx}: {row_error}", exc_info=True)
logger.error(f"Error processing row {idx}: {row_error}", exc_info=True)
continue
except Exception as e:
logger.error(f"Фатальная ошибка в get_shouts_with_links: {e}", exc_info=True)
logger.error(f"Fatal error in get_shouts_with_links: {e}", exc_info=True)
raise
finally:
logger.info(f"Returning {len(shouts)} shouts from get_shouts_with_links")
return shouts
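Since this hunk swaps json for orjson, a small reminder of the API difference that usually matters in such a migration: orjson.dumps returns bytes, and orjson.loads accepts both bytes and str.

import orjson

payload = {"viewed": 3, "commented": 1}
raw = orjson.dumps(payload)            # bytes, not str
assert isinstance(raw, bytes)
assert orjson.loads(raw) == payload
assert orjson.loads(raw.decode()) == payload  # str input is accepted too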
@@ -290,7 +319,6 @@ def apply_filters(q, filters):
@query.field("get_shout")
@login_accepted
async def get_shout(_, info: GraphQLResolveInfo, slug="", shout_id=0):
"""
Получение публикации по slug или id.

View File

@@ -63,47 +63,35 @@ def add_author_stat_columns(q):
:param q: SQL query selecting authors.
:return: The query with statistics columns added.
"""
# Alias the tables to avoid name conflicts
aliased_shout_author = aliased(ShoutAuthor)
aliased_shout = aliased(Shout)
aliased_author_follower = aliased(AuthorFollower)
# Subquery counting publications
shouts_subq = (
select(func.count(distinct(Shout.id)))
.select_from(ShoutAuthor)
.join(Shout, and_(Shout.id == ShoutAuthor.shout, Shout.deleted_at.is_(None)))
.where(ShoutAuthor.author == Author.id)
.scalar_subquery()
)
# Apply filters and add the statistics columns
# Subquery counting followers
followers_subq = (
select(func.count(distinct(AuthorFollower.follower)))
.where(AuthorFollower.author == Author.id)
.scalar_subquery()
)
# Main query
q = (
q.select_from(Author)
.join(
aliased_shout_author,
aliased_shout_author.author == Author.id,
.add_columns(shouts_subq.label("shouts_stat"), followers_subq.label("followers_stat"))
.group_by(Author.id)
)
.join(
aliased_shout,
and_(
aliased_shout.id == aliased_shout_author.shout,
aliased_shout.deleted_at.is_(None),
),
)
.add_columns(
func.count(distinct(aliased_shout.id)).label("shouts_stat")
)  # Count of the author's unique publications
)
# Add the author's follower count
q = q.outerjoin(aliased_author_follower, aliased_author_follower.author == Author.id).add_columns(
func.count(distinct(aliased_author_follower.follower)).label("followers_stat")
)
# Group by author ID
q = q.group_by(Author.id)
return q
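A minimal sketch of consuming the refactored query, assuming the incoming q selects Author (as it does in get_with_stat); the labelled scalar subqueries come back as extra row columns.

q = add_author_stat_columns(select(Author))
with local_session() as session:
    for author, shouts_stat, followers_stat in session.execute(q):
        print(author.slug, shouts_stat, followers_stat)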
def get_topic_shouts_stat(topic_id: int) -> int:
"""
Get the number of publications for the given topic.
:param topic_id: Topic ID.
:return: Number of unique publications for the topic.
Get the number of published posts for a topic
"""
q = (
select(func.count(distinct(ShoutTopic.shout)))
@@ -116,7 +104,7 @@ def get_topic_shouts_stat(topic_id: int) -> int:
)
)
)
# Run the query and fetch the result
with local_session() as session:
result = session.execute(q).first()
return result[0] if result else 0
@@ -198,10 +186,7 @@ def get_topic_comments_stat(topic_id: int) -> int:
def get_author_shouts_stat(author_id: int) -> int:
"""
Get the number of publications for the given author.
:param author_id: Author ID.
:return: Number of the author's unique publications.
Get the number of published posts for an author
"""
aliased_shout_author = aliased(ShoutAuthor)
aliased_shout = aliased(Shout)
@@ -214,6 +199,7 @@ def get_author_shouts_stat(author_id: int) -> int:
and_(
aliased_shout_author.author == author_id,
aliased_shout.published_at.is_not(None),
aliased_shout.deleted_at.is_(None),  # also exclude deleted shouts
)
)
)

View File

@@ -1,44 +1,240 @@
from sqlalchemy import select
from sqlalchemy import desc, select, text
from cache.cache import (
cache_topic,
cached_query,
get_cached_topic_authors,
get_cached_topic_by_slug,
get_cached_topic_followers,
invalidate_cache_by_prefix,
)
from cache.memorycache import cache_region
from orm.author import Author
from orm.topic import Topic
from resolvers.stat import get_with_stat
from services.auth import login_required
from services.db import local_session
from services.redis import redis
from services.schema import mutation, query
from utils.logger import root_logger as logger
# Helper for fetching all topics without statistics
async def get_all_topics():
"""
Get all topics without statistics.
Used when a full list of topics is needed without any extra information.
Returns:
    list: All topics without statistics
"""
cache_key = "topics:all:basic"
# Function fetching all topics from the DB
async def fetch_all_topics():
logger.debug("Fetching the full list of topics from the DB and caching the result")
with local_session() as session:
# Query for the basic topic information
topics_query = select(Topic)
topics = session.execute(topics_query).scalars().all()
# Convert the topics to dicts
return [topic.dict() for topic in topics]
# Use the generic query-caching helper
return await cached_query(cache_key, fetch_all_topics)
# Helper for fetching topics with statistics and pagination
async def get_topics_with_stats(limit=100, offset=0, community_id=None, by=None):
"""
Get topics with statistics, paginated.
Args:
    limit: Maximum number of topics to return
    offset: Pagination offset
    community_id: Optional community ID to filter by
    by: Optional sort parameter
Returns:
    list: Topics with their statistics
"""
# Build the cache key
cache_key = f"topics:stats:limit={limit}:offset={offset}:community_id={community_id}"
# Function fetching the topics from the DB
async def fetch_topics_with_stats():
logger.debug(f"Querying topics with statistics: limit={limit}, offset={offset}")
with local_session() as session:
# Base query for the topics
base_query = select(Topic)
# Filter by community if provided
if community_id:
base_query = base_query.where(Topic.community == community_id)
# Apply sorting based on the by parameter
if by:
if isinstance(by, dict):
# Handle a dict of sort parameters
for field, direction in by.items():
column = getattr(Topic, field, None)
if column:
if direction.lower() == "desc":
base_query = base_query.order_by(desc(column))
else:
base_query = base_query.order_by(column)
elif by == "popular":
# Sort by popularity (number of publications)
# Note: this needs an extra query or a subquery
base_query = base_query.order_by(
desc(Topic.id)
)  # Temporary; replace with a proper implementation
else:
# By default, sort by ID in reverse order
base_query = base_query.order_by(desc(Topic.id))
else:
# By default, sort by ID in reverse order
base_query = base_query.order_by(desc(Topic.id))
# Apply limit and offset
base_query = base_query.limit(limit).offset(offset)
# Fetch the topics
topics = session.execute(base_query).scalars().all()
topic_ids = [topic.id for topic in topics]
if not topic_ids:
return []
# Query publication statistics for the selected topics
shouts_stats_query = f"""
SELECT st.topic, COUNT(DISTINCT s.id) as shouts_count
FROM shout_topic st
JOIN shout s ON st.shout = s.id AND s.deleted_at IS NULL
WHERE st.topic IN ({",".join(map(str, topic_ids))})
GROUP BY st.topic
"""
shouts_stats = {row[0]: row[1] for row in session.execute(text(shouts_stats_query))}
# Query follower statistics for the selected topics
followers_stats_query = f"""
SELECT topic, COUNT(DISTINCT follower) as followers_count
FROM topic_followers
WHERE topic IN ({",".join(map(str, topic_ids))})
GROUP BY topic
"""
followers_stats = {row[0]: row[1] for row in session.execute(text(followers_stats_query))}
# Build the result with statistics attached
result = []
for topic in topics:
topic_dict = topic.dict()
topic_dict["stat"] = {
"shouts": shouts_stats.get(topic.id, 0),
"followers": followers_stats.get(topic.id, 0),
}
result.append(topic_dict)
# Cache each topic separately for use by other functions
await cache_topic(topic_dict)
return result
# Use the generic query-caching helper
return await cached_query(cache_key, fetch_topics_with_stats)
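An illustrative call to the helper above; the community id and the sort field are example values.

async def example_topics_page():
    topics = await get_topics_with_stats(limit=20, offset=0, community_id=1, by={"title": "asc"})
    for t in topics:
        print(t["slug"], t["stat"]["shouts"], t["stat"]["followers"])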
# Topic cache invalidation helper
async def invalidate_topics_cache(topic_id=None):
"""
Invalidate topic caches when data changes.
Args:
    topic_id: Optional topic ID for targeted invalidation.
        If omitted, all topic caches are invalidated.
"""
if topic_id:
# Targeted invalidation of a single topic
logger.debug(f"Invalidating cache for topic #{topic_id}")
specific_keys = [
f"topic:id:{topic_id}",
f"topic:authors:{topic_id}",
f"topic:followers:{topic_id}",
f"topic_shouts_{topic_id}",
]
# Get the topic slug, if any
with local_session() as session:
topic = session.query(Topic).filter(Topic.id == topic_id).first()
if topic and topic.slug:
specific_keys.append(f"topic:slug:{topic.slug}")
# Delete the specific keys
for key in specific_keys:
try:
await redis.execute("DEL", key)
logger.debug(f"Deleted cache key {key}")
except Exception as e:
logger.error(f"Error deleting key {key}: {e}")
# Also find and delete collection keys containing data about this topic
collection_keys = await redis.execute("KEYS", "topics:stats:*")
if collection_keys:
await redis.execute("DEL", *collection_keys)
logger.debug(f"Deleted {len(collection_keys)} topic collection keys")
else:
# Full invalidation of all topic caches
logger.debug("Full topic cache invalidation")
await invalidate_cache_by_prefix("topics")
# Query for all topics
@query.field("get_topics_all")
def get_topics_all(_, _info):
cache_key = "get_topics_all"  # Cache key
async def get_topics_all(_, _info):
"""
Get the list of all topics without statistics.
@cache_region.cache_on_arguments(cache_key)
def _get_topics_all():
topics_query = select(Topic)
return get_with_stat(topics_query)  # Fetch topics with statistics
Returns:
list: All topics
"""
return await get_all_topics()
return _get_topics_all()
# Query for topics with pagination and statistics
@query.field("get_topics_paginated")
async def get_topics_paginated(_, _info, limit=100, offset=0, by=None):
"""
Get a paginated list of topics with statistics.
Args:
    limit: Maximum number of topics to return
    offset: Pagination offset
    by: Optional sort parameters
Returns:
    list: Topics with their statistics
"""
return await get_topics_with_stats(limit, offset, None, by)
# Query for topics by community
@query.field("get_topics_by_community")
def get_topics_by_community(_, _info, community_id: int):
cache_key = f"get_topics_by_community_{community_id}"  # Cache key
async def get_topics_by_community(_, _info, community_id: int, limit=100, offset=0, by=None):
"""
Get the topics belonging to the given community, with pagination and statistics.
@cache_region.cache_on_arguments(cache_key)
def _get_topics_by_community():
topics_by_community_query = select(Topic).where(Topic.community == community_id)
return get_with_stat(topics_by_community_query)
Args:
community_id: Community ID
limit: Maximum number of topics to return
offset: Pagination offset
by: Optional sort parameters
return _get_topics_by_community()
Returns:
list: Topics with their statistics
"""
return await get_topics_with_stats(limit, offset, community_id, by)
# Query for topics by author
@@ -66,31 +262,43 @@ async def get_topic(_, _info, slug: str):
# Mutation for creating a new topic
@mutation.field("create_topic")
@login_required
async def create_topic(_, _info, inp):
async def create_topic(_, _info, topic_input):
with local_session() as session:
# TODO: check the user's rights to create a topic for the given community
# and the permission to create at all
new_topic = Topic(**inp)
new_topic = Topic(**topic_input)
session.add(new_topic)
session.commit()
# Invalidate the cache of all topics
await invalidate_topics_cache()
return {"topic": new_topic}
# Mutation for updating a topic
@mutation.field("update_topic")
@login_required
async def update_topic(_, _info, inp):
slug = inp["slug"]
async def update_topic(_, _info, topic_input):
slug = topic_input["slug"]
with local_session() as session:
topic = session.query(Topic).filter(Topic.slug == slug).first()
if not topic:
return {"error": "topic not found"}
else:
Topic.update(topic, inp)
old_slug = topic.slug
Topic.update(topic, topic_input)
session.add(topic)
session.commit()
# Invalidate the cache only for this specific topic
await invalidate_topics_cache(topic.id)
# If the slug changed, delete the old key
if old_slug != topic.slug:
await redis.execute("DEL", f"topic:slug:{old_slug}")
logger.debug(f"Deleted cache key for the old slug: {old_slug}")
return {"topic": topic}
@@ -111,6 +319,11 @@ async def delete_topic(_, info, slug: str):
session.delete(t)
session.commit()
# Invalidate the cache of all topics and of this specific topic
await invalidate_topics_cache()
await redis.execute("DEL", f"topic:slug:{slug}")
await redis.execute("DEL", f"topic:id:{t.id}")
return {}
return {"error": "access denied"}

View File

@@ -1,15 +1,47 @@
input ShoutInput {
slug: String
input MediaItemInput {
url: String
title: String
body: String
source: String
pic: String
date: String
genre: String
artist: String
lyrics: String
}
input AuthorInput {
id: Int!
slug: String
}
input TopicInput {
id: Int
slug: String!
title: String
body: String
pic: String
}
input DraftInput {
id: Int
# no created_at, updated_at, deleted_at, updated_by, deleted_by
layout: String
shout_id: Int # Changed from shout: Shout
author_ids: [Int!] # Changed from authors: [Author]
topic_ids: [Int!] # Changed from topics: [Topic]
main_topic_id: Int # Changed from main_topic: Topic
media: [MediaItemInput] # Changed to use MediaItemInput
lead: String
description: String
layout: String
media: String
topics: [TopicInput]
community: Int
subtitle: String
lang: String
seo: String
body: String
title: String
slug: String
cover: String
cover_caption: String
}
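To make the reshaped DraftInput concrete, a sketch of the variables a client might send to create_draft; every value below is an example, not real data.

draft_input = {
    "title": "Example title",
    "body": "<p>Example body</p>",
    "lang": "en",
    "topic_ids": [1, 2],        # illustrative topic ids
    "main_topic_id": 1,
    "media": [{"url": "https://example.org/cover.mp3", "title": "Example track"}],
}
variables = {"draft_input": draft_input}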
input ProfileInput {
@@ -21,14 +53,6 @@ input ProfileInput {
about: String
}
input TopicInput {
id: Int
slug: String!
title: String
body: String
pic: String
}
input ReactionInput {
id: Int
kind: ReactionKind!

View File

@@ -3,18 +3,23 @@ type Mutation {
rate_author(rated_slug: String!, value: Int!): CommonResult!
update_author(profile: ProfileInput!): CommonResult!
# editor
create_shout(inp: ShoutInput!): CommonResult!
update_shout(shout_id: Int!, shout_input: ShoutInput, publish: Boolean): CommonResult!
delete_shout(shout_id: Int!): CommonResult!
# draft
create_draft(draft_input: DraftInput!): CommonResult!
update_draft(draft_id: Int!, draft_input: DraftInput!): CommonResult!
delete_draft(draft_id: Int!): CommonResult!
# publication
publish_shout(shout_id: Int!): CommonResult!
publish_draft(draft_id: Int!): CommonResult!
unpublish_draft(draft_id: Int!): CommonResult!
unpublish_shout(shout_id: Int!): CommonResult!
# follower
follow(what: FollowingEntity!, slug: String!): AuthorFollowsResult!
unfollow(what: FollowingEntity!, slug: String!): AuthorFollowsResult!
# topic
create_topic(input: TopicInput!): CommonResult!
update_topic(input: TopicInput!): CommonResult!
create_topic(topic_input: TopicInput!): CommonResult!
update_topic(topic_input: TopicInput!): CommonResult!
delete_topic(slug: String!): CommonResult!
# reaction
@@ -40,7 +45,7 @@ type Mutation {
# community
join_community(slug: String!): CommonResult!
leave_community(slug: String!): CommonResult!
create_community(input: CommunityInput!): CommonResult!
update_community(input: CommunityInput!): CommonResult!
create_community(community_input: CommunityInput!): CommonResult!
update_community(community_input: CommunityInput!): CommonResult!
delete_community(slug: String!): CommonResult!
}

View File

@@ -26,6 +26,9 @@ type Query {
load_shout_ratings(shout: Int!, limit: Int, offset: Int): [Reaction]
load_comment_ratings(comment: Int!, limit: Int, offset: Int): [Reaction]
# branched comments pagination
load_comments_branch(shout: Int!, parent_id: Int, limit: Int, offset: Int, sort: ReactionSort, children_limit: Int, children_offset: Int): [Reaction]
# reader
get_shout(slug: String, shout_id: Int): Shout
load_shouts_by(options: LoadShoutsOptions): [Shout]
@@ -51,6 +54,7 @@ type Query {
# editor
get_my_shout(shout_id: Int!): CommonResult!
get_shouts_drafts: CommonResult!
load_drafts: CommonResult!
# topic
get_topic(slug: String!): Topic

View File

@@ -55,6 +55,7 @@ type Reaction {
stat: Stat
oid: String
# old_thread: String
first_replies: [Reaction]
}
type MediaItem {
@@ -100,12 +101,40 @@ type Shout {
deleted_at: Int
version_of: Shout # TODO: use version_of somewhere
draft: Draft
media: [MediaItem]
stat: Stat
score: Float
}
type Draft {
id: Int!
created_at: Int!
created_by: Author!
layout: String
slug: String
title: String
subtitle: String
lead: String
description: String
body: String
media: [MediaItem]
cover: String
cover_caption: String
lang: String
seo: String
# auto
updated_at: Int
deleted_at: Int
updated_by: Author
deleted_by: Author
authors: [Author]
topics: [Topic]
}
type Stat {
rating: Int
commented: Int
@@ -163,6 +192,8 @@ type Topic {
type CommonResult {
error: String
drafts: [Draft]
draft: Draft
slugs: [String]
shout: Shout
shouts: [Shout]

View File

@@ -1,34 +0,0 @@
import sys
from pathlib import Path
from granian.constants import Interfaces
from granian.log import LogLevels
from granian.server import Granian
from settings import PORT
from utils.logger import root_logger as logger
if __name__ == "__main__":
logger.info("started")
try:
granian_instance = Granian(
"main:app",
address="0.0.0.0",
port=PORT,
interface=Interfaces.ASGI,
threads=4,
websockets=False,
log_level=LogLevels.debug,
backlog=2048,
)
if "dev" in sys.argv:
logger.info("dev mode, building ssl context")
granian_instance.build_ssl_context(cert=Path("localhost.pem"), key=Path("localhost-key.pem"), password=None)
granian_instance.serve()
except Exception as error:
logger.error(f"Granian error: {error}", exc_info=True)
raise
finally:
logger.info("stopped")

View File

@@ -1,5 +1,5 @@
from dataclasses import dataclass
from typing import Any, Dict, List, Optional
from typing import List, Optional
from orm.author import Author
from orm.community import Community
@@ -22,33 +22,3 @@ class CommonResult:
topics: Optional[List[Topic]] = None
community: Optional[Community] = None
communities: Optional[List[Community]] = None
@classmethod
def ok(cls, data: Dict[str, Any]) -> "CommonResult":
"""
Create a successful result.
Args:
    data: Dict with the data to include in the result.
Returns:
    CommonResult: An instance populated with the provided data.
"""
result = cls()
for key, value in data.items():
if hasattr(result, key):
setattr(result, key, value)
return result
@classmethod
def error(cls, message: str):
"""
Create an error result.
Args:
message: The error message.
Returns:
CommonResult: An instance with the error message.
"""
return cls(error=message)
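With the ok()/error() helpers removed, resolvers construct the dataclass directly. A minimal sketch, using only fields visible in this hunk (topics, error); the import path is an assumption:

from services.common_result import CommonResult  # module path assumed

# Success path: populate whichever payload field the resolver returns.
ok_result = CommonResult(topics=[])

# Failure path: only the error message is set.
error_result = CommonResult(error="draft not found")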

View File

@@ -1,14 +1,25 @@
import json
import math
import time
import traceback
import warnings
from typing import Any, Callable, Dict, TypeVar
import orjson
import sqlalchemy
from sqlalchemy import JSON, Column, Engine, Integer, create_engine, event, exc, func, inspect
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy.orm import Session, configure_mappers
from sqlalchemy import (
JSON,
Column,
Engine,
Index,
Integer,
create_engine,
event,
exc,
func,
inspect,
text,
)
from sqlalchemy.orm import Session, configure_mappers, declarative_base
from sqlalchemy.sql.schema import Table
from settings import DB_URL
@@ -47,6 +58,82 @@ def create_table_if_not_exists(engine, table):
logger.info(f"Table '{table.__tablename__}' ok.")
def sync_indexes():
"""
Synchronizes database indexes with the indexes defined in the SQLAlchemy models.
Creates any indexes that are declared in the models but missing from the database.
Uses pg_catalog (PostgreSQL only) to obtain the list of existing indexes.
"""
if not DB_URL.startswith("postgres"):
logger.warning("sync_indexes is only supported for PostgreSQL.")
return
logger.info("Starting database index synchronization...")
# Fetch all indexes that already exist in the database
with local_session() as session:
existing_indexes_query = text("""
SELECT
t.relname AS table_name,
i.relname AS index_name
FROM
pg_catalog.pg_class i
JOIN
pg_catalog.pg_index ix ON ix.indexrelid = i.oid
JOIN
pg_catalog.pg_class t ON t.oid = ix.indrelid
JOIN
pg_catalog.pg_namespace n ON n.oid = i.relnamespace
WHERE
i.relkind = 'i'
AND n.nspname = 'public'
AND t.relkind = 'r'
ORDER BY
t.relname, i.relname;
""")
existing_indexes = {row[1].lower() for row in session.execute(existing_indexes_query)}
logger.debug(f"Найдено {len(existing_indexes)} существующих индексов в БД")
# Проверяем каждую модель и её индексы
for _model_name, model_class in REGISTRY.items():
if hasattr(model_class, "__table__") and hasattr(model_class, "__table_args__"):
table_args = model_class.__table_args__
# If table_args is a tuple, look for Index objects inside it
if isinstance(table_args, tuple):
for arg in table_args:
if isinstance(arg, Index):
index_name = arg.name.lower()
# Check whether the index already exists in the database
if index_name not in existing_indexes:
logger.info(
f"Creating missing index {index_name} for table {model_class.__tablename__}"
)
# Create the index since it is missing
try:
arg.create(engine)
logger.info(f"Index {index_name} created successfully")
except Exception as e:
logger.error(f"Error creating index {index_name}: {e}")
else:
logger.debug(f"Index {index_name} already exists")
# Run ANALYZE on the tables to help the query planner
for model_name, model_class in REGISTRY.items():
if hasattr(model_class, "__tablename__"):
try:
session.execute(text(f"ANALYZE {model_class.__tablename__}"))
logger.debug(f"Таблица {model_class.__tablename__} проанализирована")
except Exception as e:
logger.error(f"Ошибка при анализе таблицы {model_class.__tablename__}: {e}")
logger.info("Синхронизация индексов завершена.")
# noinspection PyUnusedLocal
def local_session(src=""):
return Session(bind=engine, expire_on_commit=False)
@@ -75,8 +162,8 @@ class Base(declarative_base()):
# Check if the value is JSON and decode it if necessary
if isinstance(value, (str, bytes)) and isinstance(self.__table__.columns[column_name].type, JSON):
try:
data[column_name] = json.loads(value)
except (TypeError, json.JSONDecodeError) as e:
data[column_name] = orjson.loads(value)
except (TypeError, orjson.JSONDecodeError) as e:
logger.error(f"Error decoding JSON for column '{column_name}': {e}")
data[column_name] = value
else:
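The sync_indexes() function above only creates Index objects that models declare in a __table_args__ tuple. A minimal sketch of such a declaration, assuming Base is the declarative base from this module; the model and column names are purely illustrative:

from sqlalchemy import Column, Index, Integer, String

from services.db import Base  # declarative base defined in the module above


class ExampleDraft(Base):
    __tablename__ = "example_draft"

    id = Column(Integer, primary_key=True)
    slug = Column(String)
    created_by = Column(Integer)

    # sync_indexes() compares these names against pg_catalog and creates any that are missing.
    __table_args__ = (
        Index("idx_example_draft_slug", "slug"),
        Index("idx_example_draft_created_by", "created_by"),
    )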

View File

@@ -1,4 +1,4 @@
import json
import orjson
from orm.notification import Notification
from services.db import local_session
@@ -18,7 +18,7 @@ async def notify_reaction(reaction, action: str = "create"):
data = {"payload": reaction, "action": action}
try:
save_notification(action, channel_name, data.get("payload"))
await redis.publish(channel_name, json.dumps(data))
await redis.publish(channel_name, orjson.dumps(data))
except Exception as e:
logger.error(f"Failed to publish to channel {channel_name}: {e}")
@@ -28,7 +28,7 @@ async def notify_shout(shout, action: str = "update"):
data = {"payload": shout, "action": action}
try:
save_notification(action, channel_name, data.get("payload"))
await redis.publish(channel_name, json.dumps(data))
await redis.publish(channel_name, orjson.dumps(data))
except Exception as e:
logger.error(f"Failed to publish to channel {channel_name}: {e}")
@@ -43,7 +43,7 @@ async def notify_follower(follower: dict, author_id: int, action: str = "follow"
save_notification(action, channel_name, data.get("payload"))
# Convert data to JSON string
json_data = json.dumps(data)
json_data = orjson.dumps(data)
# Ensure the data is not empty before publishing
if json_data:
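One behavioural difference to keep in mind with this swap: json.dumps returns a str, while orjson.dumps returns bytes. redis publish accepts bytes, so the calls above still work, but any consumer that expects a str has to decode. A small illustration:

import orjson

data = {"action": "create", "payload": {"id": 1}}

raw = orjson.dumps(data)      # bytes: b'{"action":"create","payload":{"id":1}}'
text = raw.decode("utf-8")    # str, if a string payload is required downstream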

170
services/pretopic.py Normal file
View File

@@ -0,0 +1,170 @@
import concurrent.futures
from typing import Any, Dict, List, Tuple
from txtai.embeddings import Embeddings
from services.logger import root_logger as logger
class TopicClassifier:
def __init__(self, shouts_by_topic: Dict[str, str], publications: List[Dict[str, str]]):
"""
Initialize the topic classifier and the publication search.
Args:
shouts_by_topic: Dict of {topic: concatenated text of all publications in that topic}
publications: List of publications with 'id', 'title', 'text' fields
"""
self.shouts_by_topic = shouts_by_topic
self.topics = list(shouts_by_topic.keys())
self.publications = publications
self.topic_embeddings = None # For topic classification
self.search_embeddings = None # For publication search
self._initialization_future = None
self._executor = concurrent.futures.ThreadPoolExecutor(max_workers=1)
def initialize(self) -> None:
"""
Start preparing the vector representations in a background thread.
"""
if self._initialization_future is None:
self._initialization_future = self._executor.submit(self._prepare_embeddings)
logger.info("Text vectorization started in the background...")
def _prepare_embeddings(self) -> None:
"""
Prepare the vector representations used for topic classification and search.
"""
logger.info("Preparing vector representations...")
# Model suitable for Russian-language text
# TODO: model local caching
model_path = "sentence-transformers/paraphrase-multilingual-mpnet-base-v2"
# Initialize embeddings for topic classification
self.topic_embeddings = Embeddings(path=model_path)
topic_documents = [(topic, text) for topic, text in self.shouts_by_topic.items()]
self.topic_embeddings.index(topic_documents)
# Initialize embeddings for publication search
self.search_embeddings = Embeddings(path=model_path)
search_documents = [(str(pub["id"]), f"{pub['title']} {pub['text']}") for pub in self.publications]
self.search_embeddings.index(search_documents)
logger.info("Подготовка векторных представлений завершена.")
def predict_topic(self, text: str) -> Tuple[float, str]:
"""
Predict the topic of the given text from the known set of topics.
Args:
text: Text to classify
Returns:
Tuple[float, str]: (confidence, topic)
"""
if not self.is_ready():
logger.error("Vector representations are not ready. Call initialize() and wait for it to finish.")
return 0.0, "unknown"
try:
# Look for the most similar topic
results = self.topic_embeddings.search(text, 1)
if not results:
return 0.0, "unknown"
score, topic = results[0]
return float(score), topic
except Exception as e:
logger.error(f"Ошибка при определении темы: {str(e)}")
return 0.0, "unknown"
def search_similar(self, query: str, limit: int = 5) -> List[Dict[str, Any]]:
"""
Find publications similar to the search query.
Args:
query: Search query
limit: Maximum number of results
Returns:
List[Dict]: Found publications with a relevance score
"""
if not self.is_ready():
logger.error("Vector representations are not ready. Call initialize() and wait for it to finish.")
return []
try:
# Search for similar publications
results = self.search_embeddings.search(query, limit)
# Build the result list
found_publications = []
for score, pub_id in results:
# Look up the publication by id
publication = next((pub for pub in self.publications if str(pub["id"]) == pub_id), None)
if publication:
found_publications.append({**publication, "relevance": float(score)})
return found_publications
except Exception as e:
logger.error(f"Ошибка при поиске публикаций: {str(e)}")
return []
def is_ready(self) -> bool:
"""
Check whether the vector representations are ready.
"""
return self.topic_embeddings is not None and self.search_embeddings is not None
def wait_until_ready(self) -> None:
"""
Wait until the vector representations have been prepared.
"""
if self._initialization_future:
self._initialization_future.result()
def __del__(self):
"""
Clean up resources when the object is destroyed.
"""
if self._executor:
self._executor.shutdown(wait=False)
# Usage example:
"""
shouts_by_topic = {
"Спорт": "... большой текст со всеми спортивными публикациями ...",
"Технологии": "... большой текст со всеми технологическими публикациями ...",
"Политика": "... большой текст со всеми политическими публикациями ..."
}
publications = [
{
'id': 1,
'title': 'Новый процессор AMD',
'text': 'Компания AMD представила новый процессор...'
},
{
'id': 2,
'title': 'Футбольный матч',
'text': 'Вчера состоялся решающий матч...'
}
]
# Create the classifier
classifier = TopicClassifier(shouts_by_topic, publications)
classifier.initialize()
classifier.wait_until_ready()
# Classify a piece of text
text = "Новый процессор показал высокую производительность"
score, topic = classifier.predict_topic(text)
print(f"Тема: {topic} (уверенность: {score:.4f})")
# Поиск похожих публикаций
query = "процессор AMD производительность"
similar_publications = classifier.search_similar(query, limit=3)
for pub in similar_publications:
print(f"\nНайдена публикация (релевантность: {pub['relevance']:.4f}):")
print(f"Заголовок: {pub['title']}")
print(f"Текст: {pub['text'][:100]}...")
"""

View File

@@ -3,6 +3,7 @@ from asyncio.log import logger
import httpx
from ariadne import MutationType, QueryType
from services.db import create_table_if_not_exists, local_session
from settings import AUTH_URL
query = QueryType()
@@ -37,5 +38,50 @@ async def request_graphql_data(gql, url=AUTH_URL, headers=None):
logger.error(f"{url}: {response.status_code} {response.text}")
except Exception as _e:
import traceback
logger.error(f"request_graphql_data error: {traceback.format_exc()}")
return None
def create_all_tables():
"""Create all database tables in the correct order."""
from orm import author, community, draft, notification, reaction, shout, topic
# Order matters: tables without foreign keys first, then the dependent tables
models_in_order = [
# user.User, # Base auth table
author.Author, # Base table
community.Community, # Base table
topic.Topic, # Base table
# Relations for the base tables
author.AuthorFollower, # Depends on Author
community.CommunityFollower, # Depends on Community
topic.TopicFollower, # Depends on Topic
# Drafts (no longer depend on Shout)
draft.Draft, # Depends only on Author
draft.DraftAuthor, # Depends on Draft and Author
draft.DraftTopic, # Depends on Draft and Topic
# Main content tables
shout.Shout, # Depends on Author and Draft
shout.ShoutAuthor, # Depends on Shout and Author
shout.ShoutTopic, # Depends on Shout and Topic
# Reactions
reaction.Reaction, # Depends on Author and Shout
shout.ShoutReactionsFollower, # Depends on Shout and Reaction
# Additional tables
author.AuthorRating, # Depends on Author
notification.Notification, # Depends on Author
notification.NotificationSeen, # Depends on Notification
# collection.Collection,
# collection.ShoutCollection,
# invite.Invite
]
with local_session() as session:
for model in models_in_order:
try:
create_table_if_not_exists(session.get_bind(), model)
# logger.info(f"Created or verified table: {model.__tablename__}")
except Exception as e:
logger.error(f"Error creating table {model.__tablename__}: {e}")
raise
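create_all_tables() is meant to run once at startup, before the first request is served. A minimal sketch; the services.schema module path is an assumption inferred from the imports in this hunk:

from services.schema import create_all_tables  # assumed import path for the module shown above

create_all_tables()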

View File

@@ -3,6 +3,7 @@ import json
import logging
import os
import orjson
from opensearchpy import OpenSearch
from services.redis import redis
@@ -142,7 +143,7 @@ class SearchService:
# Check and, if needed, update the index structure
result = self.client.indices.get_mapping(index=self.index_name)
if isinstance(result, str):
result = json.loads(result)
result = orjson.loads(result)
if isinstance(result, dict):
mapping = result.get(self.index_name, {}).get("mappings")
logger.info(f"Найдена структура индексации: {mapping['properties'].keys()}")

View File

@@ -1,13 +1,19 @@
import asyncio
import json
import os
import time
from datetime import datetime, timedelta, timezone
from typing import Dict
import orjson
# ga
from google.analytics.data_v1beta import BetaAnalyticsDataClient
from google.analytics.data_v1beta.types import DateRange, Dimension, Metric, RunReportRequest
from google.analytics.data_v1beta.types import (
DateRange,
Dimension,
Metric,
RunReportRequest,
)
from google.analytics.data_v1beta.types import Filter as GAFilter
from orm.author import Author
@@ -79,7 +85,7 @@ class ViewedStorage:
logger.warn(f" * {viewfile_path} is too old: {self.start_date}")
with open(viewfile_path, "r") as file:
precounted_views = json.load(file)
precounted_views = orjson.loads(file.read())
self.precounted_by_slug.update(precounted_views)
logger.info(f" * {len(precounted_views)} shouts with views was loaded.")

View File

@@ -1,17 +1,29 @@
import sys
from os import environ
PORT = 8000
MODE = "development" if "dev" in sys.argv else "production"
DEV_SERVER_PID_FILE_NAME = "dev-server.pid"
PORT = environ.get("PORT") or 8000
# storages
DB_URL = (
environ.get("DATABASE_URL", "").replace("postgres://", "postgresql://")
or environ.get("DB_URL", "").replace("postgres://", "postgresql://")
or "sqlite:///discoursio-db.sqlite3"
or "sqlite:///discoursio.db"
)
REDIS_URL = environ.get("REDIS_URL") or "redis://127.0.0.1"
AUTH_URL = environ.get("AUTH_URL") or ""
GLITCHTIP_DSN = environ.get("GLITCHTIP_DSN")
DEV_SERVER_PID_FILE_NAME = "dev-server.pid"
MODE = "development" if "dev" in sys.argv else "production"
# debug
GLITCHTIP_DSN = environ.get("GLITCHTIP_DSN")
# authorizer.dev
AUTH_URL = environ.get("AUTH_URL") or "https://auth.discours.io/graphql"
ADMIN_SECRET = environ.get("AUTH_SECRET") or "nothing"
WEBHOOK_SECRET = environ.get("WEBHOOK_SECRET") or "nothing-else"
# own auth
ONETIME_TOKEN_LIFE_SPAN = 60 * 60 * 24 * 3 # 3 days
SESSION_TOKEN_LIFE_SPAN = 60 * 60 * 24 * 30 # 30 days
JWT_ALGORITHM = "HS256"
JWT_SECRET_KEY = environ.get("JWT_SECRET") or "nothing-else-jwt-secret-matters"
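The DB_URL chain normalizes Heroku-style postgres:// URLs to the postgresql:// scheme SQLAlchemy expects, falling back to SQLite when neither variable is set. A quick illustration with made-up values:

import os

os.environ["DATABASE_URL"] = "postgres://user:pass@db.example.com:5432/discours"

db_url = (
    os.environ.get("DATABASE_URL", "").replace("postgres://", "postgresql://")
    or os.environ.get("DB_URL", "").replace("postgres://", "postgresql://")
    or "sqlite:///discoursio.db"
)
# db_url == "postgresql://user:pass@db.example.com:5432/discours"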

60
tests/conftest.py Normal file
View File

@@ -0,0 +1,60 @@
import asyncio
import os
import pytest
from sqlalchemy import create_engine
from sqlalchemy.orm import Session
from starlette.testclient import TestClient
from main import app
from services.db import Base
from services.redis import redis
# Use SQLite for testing
TEST_DB_URL = "sqlite:///test.db"
@pytest.fixture(scope="session")
def event_loop():
"""Create an instance of the default event loop for the test session."""
loop = asyncio.get_event_loop_policy().new_event_loop()
yield loop
loop.close()
@pytest.fixture(scope="session")
def test_engine():
"""Create a test database engine."""
engine = create_engine(TEST_DB_URL)
Base.metadata.create_all(engine)
yield engine
Base.metadata.drop_all(engine)
os.remove("test.db")
@pytest.fixture
def db_session(test_engine):
"""Create a new database session for a test."""
connection = test_engine.connect()
transaction = connection.begin()
session = Session(bind=connection)
yield session
session.close()
transaction.rollback()
connection.close()
@pytest.fixture
async def redis_client():
"""Create a test Redis client."""
await redis.connect()
yield redis
await redis.disconnect()
@pytest.fixture
def test_client():
"""Create a TestClient instance."""
return TestClient(app)

94
tests/test_drafts.py Normal file
View File

@@ -0,0 +1,94 @@
import pytest
from orm.author import Author
from orm.shout import Shout
@pytest.fixture
def test_author(db_session):
"""Create a test author."""
author = Author(name="Test Author", slug="test-author", user="test-user-id")
db_session.add(author)
db_session.commit()
return author
@pytest.fixture
def test_shout(db_session):
"""Create test shout with required fields."""
author = Author(name="Test Author", slug="test-author", user="test-user-id")
db_session.add(author)
db_session.flush()
shout = Shout(
title="Test Shout",
slug="test-shout",
created_by=author.id, # Required field
body="Test body",
layout="article",
lang="ru",
)
db_session.add(shout)
db_session.commit()
return shout
@pytest.mark.asyncio
async def test_create_shout(test_client, db_session, test_author):
"""Test creating a new shout."""
response = test_client.post(
"/",
json={
"query": """
mutation CreateDraft($draft_input: DraftInput!) {
create_draft(draft_input: $draft_input) {
error
draft {
id
title
body
}
}
}
""",
"variables": {
"input": {
"title": "Test Shout",
"body": "This is a test shout",
}
},
},
)
assert response.status_code == 200
data = response.json()
assert "errors" not in data
assert data["data"]["create_draft"]["draft"]["title"] == "Test Shout"
@pytest.mark.asyncio
async def test_load_drafts(test_client, db_session):
"""Test retrieving a shout."""
response = test_client.post(
"/",
json={
"query": """
query {
load_drafts {
error
drafts {
id
title
body
}
}
}
""",
"variables": {"slug": "test-shout"},
},
)
assert response.status_code == 200
data = response.json()
assert "errors" not in data
assert data["data"]["load_drafts"]["drafts"] == []

64
tests/test_reactions.py Normal file
View File

@@ -0,0 +1,64 @@
from datetime import datetime
import pytest
from orm.author import Author
from orm.reaction import ReactionKind
from orm.shout import Shout
@pytest.fixture
def test_setup(db_session):
"""Set up test data."""
now = int(datetime.now().timestamp())
author = Author(name="Test Author", slug="test-author", user="test-user-id")
db_session.add(author)
db_session.flush()
shout = Shout(
title="Test Shout",
slug="test-shout",
created_by=author.id,
body="This is a test shout",
layout="article",
lang="ru",
community=1,
created_at=now,
updated_at=now,
)
db_session.add_all([author, shout])
db_session.commit()
return {"author": author, "shout": shout}
@pytest.mark.asyncio
async def test_create_reaction(test_client, db_session, test_setup):
"""Test creating a reaction on a shout."""
response = test_client.post(
"/",
json={
"query": """
mutation CreateReaction($reaction: ReactionInput!) {
create_reaction(reaction: $reaction) {
error
reaction {
id
kind
body
created_by {
name
}
}
}
}
""",
"variables": {
"reaction": {"shout": test_setup["shout"].id, "kind": ReactionKind.LIKE.value, "body": "Great post!"}
},
},
)
assert response.status_code == 200
data = response.json()
assert "error" not in data
assert data["data"]["create_reaction"]["reaction"]["kind"] == ReactionKind.LIKE.value

85
tests/test_shouts.py Normal file
View File

@@ -0,0 +1,85 @@
from datetime import datetime
import pytest
from orm.author import Author
from orm.shout import Shout
@pytest.fixture
def test_shout(db_session):
"""Create test shout with required fields."""
now = int(datetime.now().timestamp())
author = Author(name="Test Author", slug="test-author", user="test-user-id")
db_session.add(author)
db_session.flush()
now = int(datetime.now().timestamp())
shout = Shout(
title="Test Shout",
slug="test-shout",
created_by=author.id,
body="Test body",
layout="article",
lang="ru",
community=1,
created_at=now,
updated_at=now,
)
db_session.add(shout)
db_session.commit()
return shout
@pytest.mark.asyncio
async def test_get_shout(test_client, db_session):
"""Test retrieving a shout."""
# Create an author
author = Author(name="Test Author", slug="test-author", user="test-user-id")
db_session.add(author)
db_session.flush()
now = int(datetime.now().timestamp())
# Create a shout with all required fields
shout = Shout(
title="Test Shout",
body="This is a test shout",
slug="test-shout",
created_by=author.id,
layout="article",
lang="ru",
community=1,
created_at=now,
updated_at=now,
)
db_session.add(shout)
db_session.commit()
response = test_client.post(
"/",
json={
"query": """
query GetShout($slug: String!) {
get_shout(slug: $slug) {
id
title
body
created_at
updated_at
created_by {
id
name
slug
}
}
}
""",
"variables": {"slug": "test-shout"},
},
)
data = response.json()
assert response.status_code == 200
assert "errors" not in data
assert data["data"]["get_shout"]["title"] == "Test Shout"

70
tests/test_validations.py Normal file
View File

@@ -0,0 +1,70 @@
from datetime import datetime, timedelta
import pytest
from pydantic import ValidationError
from auth.validations import (
AuthInput,
AuthResponse,
TokenPayload,
UserRegistrationInput,
)
class TestAuthValidations:
def test_auth_input(self):
"""Test basic auth input validation"""
# Valid case
auth = AuthInput(user_id="123", username="testuser", token="1234567890abcdef1234567890abcdef")
assert auth.user_id == "123"
assert auth.username == "testuser"
# Invalid cases
with pytest.raises(ValidationError):
AuthInput(user_id="", username="test", token="x" * 32)
with pytest.raises(ValidationError):
AuthInput(user_id="123", username="t", token="x" * 32)
def test_user_registration(self):
"""Test user registration validation"""
# Valid case
user = UserRegistrationInput(email="test@example.com", password="SecurePass123!", name="Test User")
assert user.email == "test@example.com"
assert user.name == "Test User"
# Test email validation
with pytest.raises(ValidationError) as exc:
UserRegistrationInput(email="invalid-email", password="SecurePass123!", name="Test")
assert "Invalid email format" in str(exc.value)
# Test password validation
with pytest.raises(ValidationError) as exc:
UserRegistrationInput(email="test@example.com", password="weak", name="Test")
assert "String should have at least 8 characters" in str(exc.value)
def test_token_payload(self):
"""Test token payload validation"""
now = datetime.utcnow()
exp = now + timedelta(hours=1)
payload = TokenPayload(user_id="123", username="testuser", exp=exp, iat=now)
assert payload.user_id == "123"
assert payload.username == "testuser"
assert payload.scopes == [] # Default empty list
def test_auth_response(self):
"""Test auth response validation"""
# Success case
success_resp = AuthResponse(success=True, token="valid_token", user={"id": "123", "name": "Test"})
assert success_resp.success is True
assert success_resp.token == "valid_token"
# Error case
error_resp = AuthResponse(success=False, error="Invalid credentials")
assert error_resp.success is False
assert error_resp.error == "Invalid credentials"
# Invalid case: the token field is required when success=True but is missing
with pytest.raises(ValidationError):
AuthResponse(success=True, user={"id": "123", "name": "Test"})

View File

@@ -1,9 +1,28 @@
import json
from decimal import Decimal
from json import JSONEncoder
class CustomJSONEncoder(json.JSONEncoder):
class CustomJSONEncoder(JSONEncoder):
"""
Extended JSON encoder with support for serializing SQLAlchemy objects.
Examples:
>>> import json
>>> from decimal import Decimal
>>> from orm.topic import Topic
>>> json.dumps(Decimal("10.50"), cls=CustomJSONEncoder)
'"10.50"'
>>> topic = Topic(id=1, slug="test")
>>> json.dumps(topic, cls=CustomJSONEncoder)
'{"id": 1, "slug": "test", ...}'
"""
def default(self, obj):
if isinstance(obj, Decimal):
return str(obj)
# Check whether the object exposes a dict() method (as SQLAlchemy models do)
if hasattr(obj, "dict") and callable(obj.dict):
return obj.dict()
return super().default(obj)

View File

@@ -1,15 +1,38 @@
import logging
from pathlib import Path
import colorlog
_lib_path = Path(__file__).parents[1]
_leng_path = len(_lib_path.as_posix())
def filter(record: logging.LogRecord):
# Define `package` attribute with the relative path.
record.package = record.pathname[_leng_path + 1 :].replace(".py", "")
record.emoji = (
"🔍"
if record.levelno == logging.DEBUG
else ""
if record.levelno == logging.INFO
else "🚧"
if record.levelno == logging.WARNING
else ""
if record.levelno == logging.ERROR
else "🧨"
if record.levelno == logging.CRITICAL
else ""
)
return record
# Define the color scheme
color_scheme = {
"DEBUG": "cyan",
"DEBUG": "light_black",
"INFO": "green",
"WARNING": "yellow",
"ERROR": "red",
"CRITICAL": "red,bg_white",
"DEFAULT": "white",
}
# Define secondary log colors
@@ -17,12 +40,12 @@ secondary_colors = {
"log_name": {"DEBUG": "blue"},
"asctime": {"DEBUG": "cyan"},
"process": {"DEBUG": "purple"},
"module": {"DEBUG": "cyan,bg_blue"},
"funcName": {"DEBUG": "light_white,bg_blue"},
"module": {"DEBUG": "light_black,bg_blue"},
"funcName": {"DEBUG": "light_white,bg_blue"}, # Add this line
}
# Define the log format string
fmt_string = "%(log_color)s%(levelname)s: %(log_color)s[%(module)s.%(funcName)s]%(reset)s %(white)s%(message)s"
fmt_string = "%(emoji)s%(log_color)s%(package)s.%(funcName)s%(reset)s %(white)s%(message)s"
# Define formatting configuration
fmt_config = {
@@ -40,6 +63,10 @@ class MultilineColoredFormatter(colorlog.ColoredFormatter):
self.secondary_log_colors = kwargs.pop("secondary_log_colors", {})
def format(self, record):
# Add default emoji if not present
if not hasattr(record, "emoji"):
record = filter(record)
message = record.getMessage()
if "\n" in message:
lines = message.split("\n")
@@ -61,8 +88,24 @@ formatter = MultilineColoredFormatter(fmt_string, **fmt_config)
stream = logging.StreamHandler()
stream.setFormatter(formatter)
def get_colorful_logger(name="main"):
# Create and configure the logger
logger = logging.getLogger(name)
logger.setLevel(logging.DEBUG)
logger.addHandler(stream)
logger.addFilter(filter)
return logger
# Set up the root logger with the same formatting
root_logger = logging.getLogger()
if not root_logger.hasHandlers():
root_logger.setLevel(logging.DEBUG)
root_logger.addHandler(stream)
root_logger.setLevel(logging.DEBUG)
root_logger.addHandler(stream)
root_logger.addFilter(filter)
ignore_logs = ["_trace", "httpx", "_client", "atrace", "aiohttp", "_client"]
for lgr in ignore_logs:
loggr = logging.getLogger(lgr)
loggr.setLevel(logging.INFO)
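A minimal usage sketch for this logger module; utils.logger follows the import seen in the deleted Granian runner earlier, while services.logger appears elsewhere, so the actual path may differ:

from utils.logger import root_logger as logger

logger.debug("cache warmed")   # rendered as: 🔍 <package>.<funcName> cache warmed
logger.info("index synchronization finished")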