63 Commits

Author SHA1 Message Date
Stepan Vladovskiy
e1d1096674 feat: deploying by Gitea without staging
All checks were successful
Deploy on push / deploy (push) Successful in 6s
2025-06-02 18:17:24 -03:00
Stepan Vladovskiy
82870a4e47 debug: precache wrapped with a timeout
All checks were successful
Deploy on push / deploy (push) Successful in 1m15s
2025-05-20 11:26:30 -03:00
Stepan Vladovskiy
80b909d801 debug: with logs in precaching process
All checks were successful
Deploy on push / deploy (push) Successful in 44s
2025-05-20 11:23:00 -03:00
Stepan Vladovskiy
1ada0a02f9 debug: with timeout for precaching 2025-05-20 11:19:58 -03:00
Stepan Vladovskiy
44aef147b5 debug: moved precache to background to avoid getting stuck ...
All checks were successful
Deploy on push / deploy (push) Successful in 45s
2025-05-20 11:03:02 -03:00
Stepan Vladovskiy
2bebfbd4df debug: force rebuild core stag branch
All checks were successful
Deploy on push / deploy (push) Successful in 44s
2025-05-19 15:45:13 -03:00
Stepan Vladovskiy
f19248184a debug: without versions for starlette and ariadne
All checks were successful
Deploy on push / deploy (push) Successful in 1m5s
2025-05-18 22:48:34 +00:00
Stepan Vladovskiy
7df9361daa debug: Dockerfile with build-essential
Some checks failed
Deploy on push / deploy (push) Failing after 1m36s
2025-05-18 22:43:20 +00:00
Stepan Vladovskiy
e38a1c1338 with vers for starlette, ariadne, granian
Some checks failed
Deploy on push / deploy (push) Failing after 50s
2025-05-18 22:36:47 +00:00
Stepan Vladovskiy
1281157d93 feat: check before parse graphQL
All checks were successful
Deploy on push / deploy (push) Successful in 44s
2025-05-14 14:42:40 -03:00
Stepan Vladovskiy
0018749905 Merge branch 'dev' into staging
All checks were successful
Deploy on push / deploy (push) Successful in 1m28s
2025-05-14 14:33:52 -03:00
a6b3b21894 draft-topics-fix2
All checks were successful
Deploy on push / deploy (push) Successful in 45s
2025-05-07 10:37:18 +03:00
51de649686 draft-topics-fix
All checks were successful
Deploy on push / deploy (push) Successful in 47s
2025-05-07 10:22:30 +03:00
2b7d5a25b5 unpublish-fix7
All checks were successful
Deploy on push / deploy (push) Successful in 46s
2025-05-03 11:52:14 +03:00
32cb810f51 unpublish-fix7 2025-05-03 11:52:10 +03:00
d2a8c23076 unpublish-fix5
All checks were successful
Deploy on push / deploy (push) Successful in 45s
2025-05-03 11:47:35 +03:00
96afda77a6 unpublish-fix5
All checks were successful
Deploy on push / deploy (push) Successful in 47s
2025-05-03 11:35:03 +03:00
785548d055 cache-revalidation-fix
All checks were successful
Deploy on push / deploy (push) Successful in 47s
2025-05-03 11:11:14 +03:00
d6202561a9 unpublish-fix4
All checks were successful
Deploy on push / deploy (push) Successful in 47s
2025-05-03 11:07:03 +03:00
3fbd2e677a unpublish-fix3
All checks were successful
Deploy on push / deploy (push) Successful in 47s
2025-05-03 11:00:19 +03:00
4f1eab513a unpublish-fix2
All checks were successful
Deploy on push / deploy (push) Successful in 46s
2025-05-03 10:57:55 +03:00
44852a1553 delete-draft-fix2 2025-05-03 10:56:34 +03:00
58ec60262b unpublish,delete-draft-fix
All checks were successful
Deploy on push / deploy (push) Successful in 1m23s
2025-05-03 10:53:40 +03:00
Stepan Vladovskiy
c344fcee2d refactoring(search.py): logs for search-combine and search-authors are equal
All checks were successful
Deploy on push / deploy (push) Successful in 6s
2025-05-02 18:28:06 -03:00
Stepan Vladovskiy
a1a61a6731 feat: follow the same logic as shout search for authors. Store them in the Redis cache + pagination
All checks were successful
Deploy on push / deploy (push) Successful in 41s
2025-05-02 18:17:05 -03:00
Stepan Vladovskiy
8d6ad2c84f refactor(author.py): remove verbose logging at resolver level 2025-05-02 18:04:10 -03:00
Stepan Vladovskiy
beba1992e9 fix(__init.py__): clean name of resolver for authors search loading
All checks were successful
Deploy on push / deploy (push) Successful in 39s
2025-04-29 19:49:47 -03:00
Stepan Vladovskiy
b0296d7747 fix(__init.py__): added created resolver in resolvers lists
All checks were successful
Deploy on push / deploy (push) Successful in 41s
2025-04-29 19:40:20 -03:00
Stepan Vladovskiy
98e3dff35e fix(author.py): resolver load_authors_search error fix
All checks were successful
Deploy on push / deploy (push) Successful in 40s
2025-04-29 18:00:38 -03:00
Stepan Vladovskiy
3782a9dffb fix(search.py, author.py): small fixes for start. logger import fails
All checks were successful
Deploy on push / deploy (push) Successful in 40s
2025-04-29 17:50:51 -03:00
Stepan Vladovskiy
93c00b3dd1 feat(author.py): add resolver for searching authors by text
All checks were successful
Deploy on push / deploy (push) Successful in 1m15s
2025-04-29 17:45:37 -03:00
5f3d90fc90 draft-publication-debug
All checks were successful
Deploy on push / deploy (push) Successful in 48s
2025-04-28 16:24:08 +03:00
f71fc7fde9 draft-publication-info
All checks were successful
Deploy on push / deploy (push) Successful in 46s
2025-04-28 11:10:18 +03:00
ed71405082 topic-commented-stat-fix4
All checks were successful
Deploy on push / deploy (push) Successful in 46s
2025-04-28 10:30:58 +03:00
79e1f15a2e topic-commented-stat-fix3
All checks were successful
Deploy on push / deploy (push) Successful in 47s
2025-04-28 10:30:04 +03:00
b17acae0af topic-commented-stat-fix2
All checks were successful
Deploy on push / deploy (push) Successful in 46s
2025-04-28 10:24:48 +03:00
d293819ad9 topic-commented-stat-fix
All checks were successful
Deploy on push / deploy (push) Successful in 48s
2025-04-28 10:13:29 +03:00
bcbfdd76e9 html wrap fix
All checks were successful
Deploy on push / deploy (push) Successful in 48s
2025-04-27 12:53:49 +03:00
b735bf8cab draft-creator-adding
All checks were successful
Deploy on push / deploy (push) Successful in 47s
2025-04-27 09:15:07 +03:00
20fd40df0e updated-by-auto-fix
All checks were successful
Deploy on push / deploy (push) Successful in 47s
2025-04-26 23:46:07 +03:00
bde3211a5f updated-by-auto
All checks were successful
Deploy on push / deploy (push) Successful in 49s
2025-04-26 17:07:02 +03:00
4cd8883d72 validhtmlfix 2025-04-26 17:02:55 +03:00
0939e91700 empty-body-fix
All checks were successful
Deploy on push / deploy (push) Successful in 45s
2025-04-26 16:19:33 +03:00
dfbdfba2f0 draft-create-fix5
All checks were successful
Deploy on push / deploy (push) Successful in 46s
2025-04-26 16:13:07 +03:00
b66e347c91 draft-create-fix
All checks were successful
Deploy on push / deploy (push) Successful in 45s
2025-04-26 16:03:41 +03:00
6d9513f1b2 reaction-by-fix4
All checks were successful
Deploy on push / deploy (push) Successful in 45s
2025-04-26 15:57:51 +03:00
af7fbd2fc9 reaction-by-fix3
All checks were successful
Deploy on push / deploy (push) Successful in 47s
2025-04-26 15:50:20 +03:00
631ad47fe8 reaction-by-fix2
All checks were successful
Deploy on push / deploy (push) Successful in 46s
2025-04-26 15:47:44 +03:00
3b3cc1c1d8 reaction-by-fix
All checks were successful
Deploy on push / deploy (push) Successful in 35s
2025-04-26 15:42:15 +03:00
e4943f524c reaction-by-upgrade2
All checks were successful
Deploy on push / deploy (push) Successful in 36s
2025-04-26 15:35:31 +03:00
e7684c9c05 reaction-by-upgrade
All checks were successful
Deploy on push / deploy (push) Successful in 36s
2025-04-26 14:10:05 +03:00
bdae2abe25 drafts schema restore + publish/unpublish fixes
All checks were successful
Deploy on push / deploy (push) Successful in 32s
2025-04-26 13:11:12 +03:00
a310d59432 draft-resolvers
All checks were successful
Deploy on push / deploy (push) Successful in 46s
2025-04-26 11:45:16 +03:00
6b2ac09f74 unpublish-fix
All checks were successful
Deploy on push / deploy (push) Successful in 46s
2025-04-26 10:16:55 +03:00
Stepan Vladovskiy
fac43e5997 refact(search,reader): without any kind of sorting
All checks were successful
Deploy on push / deploy (push) Successful in 42s
2025-04-24 21:00:41 -03:00
Stepan Vladovskiy
e7facf8d87 style(search.py): with indexing message
All checks were successful
Deploy on push / deploy (push) Successful in 42s
2025-04-24 18:45:00 -03:00
Stepan Vladovskiy
3062a2b7de refactor(search.py): check titles without bodies so they are not re-indexed on every startup
All checks were successful
Deploy on push / deploy (push) Successful in 42s
2025-04-24 14:58:14 -03:00
Stepan Vladovskiy
c0406dbbf2 refac(search.py): without logger and rm duplicated def search-text
All checks were successful
Deploy on push / deploy (push) Successful in 44s
2025-04-24 14:18:14 -03:00
Stepan Vladovskiy
ab4610575f refactor(reader.py): to handle combined search
All checks were successful
Deploy on push / deploy (push) Successful in 44s
2025-04-24 13:56:38 -03:00
Stepan Vladovskiy
5425dbf832 refactor(search.py): simplify def search 2025-04-24 13:46:58 -03:00
Stepan Vladovskiy
a10db2d38a feat(search.py): combined search on shout titles and bodies 2025-04-24 13:35:36 -03:00
5024e963e3 notify-draft-hotfix
All checks were successful
Deploy on push / deploy (push) Successful in 47s
2025-04-24 12:12:48 +03:00
Stepan Vladovskiy
83e70856cd debug(server.py): I don't know why it appears, but I'm removing it
All checks were successful
Deploy on push / deploy (push) Successful in 41s
2025-04-23 18:32:58 -03:00
23 changed files with 1209 additions and 701 deletions

View File

@@ -1,3 +1,14 @@
#### [0.4.20] - 2025-05-03
- Fixed a bug in the `CacheRevalidationManager` class: added initialization of the `_redis` attribute
- Improved Redis connection handling in the cache revalidation manager:
  - Automatic reconnection when the connection is lost
  - Connection check before performing cache operations (see the sketch below)
  - Additional logging to simplify troubleshooting
- Fixed the `unpublish_shout` resolver:
  - Correct construction of the synthetic `publication` field with `published_at: null`
  - A full data dictionary is returned instead of a model object
  - Improved loading of related data (authors, topics) so the response is built correctly
#### [0.4.19] - 2025-04-14
- dropped `Shout.description` and `Draft.description` to be UX-generated
- use redis to init views counters after migrator
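
A minimal, self-contained sketch of the "check the connection before cache operations" behaviour described in 0.4.20. The `RedisService` stand-in and the `ensure_connected` helper are assumptions for illustration only; the project uses its own `services.redis` wrapper with a private `_client` attribute.

```python
import asyncio
import logging

logger = logging.getLogger("cache")


class RedisService:
    """Stand-in for the project's Redis wrapper (assumption for this sketch)."""

    def __init__(self) -> None:
        self._client = None

    async def connect(self) -> None:
        self._client = object()  # placeholder for a real connection


async def ensure_connected(redis: RedisService) -> bool:
    """Reconnect if the client is missing; return True when Redis is usable."""
    if redis._client is None:
        logger.warning("Redis connection not established, trying to reconnect...")
        try:
            await redis.connect()
            logger.info("Redis connection established")
        except Exception as exc:
            logger.error(f"Failed to connect to Redis: {exc}")
            return False
    return True


async def main() -> None:
    redis = RedisService()
    if await ensure_connected(redis):
        print("safe to run cache operations")


if __name__ == "__main__":
    asyncio.run(main())
```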

View File

@@ -3,6 +3,7 @@ FROM python:slim
RUN apt-get update && apt-get install -y \
postgresql-client \
curl \
build-essential \
&& rm -rf /var/lib/apt/lists/*
WORKDIR /app

12
cache/precache.py vendored
View File

@@ -77,11 +77,15 @@ async def precache_topics_followers(topic_id: int, session):
async def precache_data():
logger.info("precaching...")
logger.debug("Entering precache_data")
try:
key = "authorizer_env"
logger.debug(f"Fetching existing hash for key '{key}' from Redis")
# cache reset
value = await redis.execute("HGETALL", key)
logger.debug(f"Fetched value for '{key}': {value}")
await redis.execute("FLUSHDB")
logger.debug("Redis database flushed")
logger.info("redis: FLUSHDB")
# Преобразуем словарь в список аргументов для HSET
@@ -97,21 +101,27 @@ async def precache_data():
await redis.execute("HSET", key, *value)
logger.info(f"redis hash '{key}' was restored")
logger.info("Beginning topic precache phase")
with local_session() as session:
# topics
q = select(Topic).where(Topic.community == 1)
topics = get_with_stat(q)
logger.info(f"Found {len(topics)} topics to precache")
for topic in topics:
topic_dict = topic.dict() if hasattr(topic, "dict") else topic
logger.debug(f"Precaching topic id={topic_dict.get('id')}")
await cache_topic(topic_dict)
logger.debug(f"Cached topic id={topic_dict.get('id')}")
await asyncio.gather(
precache_topics_followers(topic_dict["id"], session),
precache_topics_authors(topic_dict["id"], session),
)
logger.debug(f"Finished precaching followers and authors for topic id={topic_dict.get('id')}")
logger.info(f"{len(topics)} topics and their followings precached")
# authors
authors = get_with_stat(select(Author).where(Author.user.is_not(None)))
logger.info(f"Found {len(authors)} authors to precache")
logger.info(f"{len(authors)} authors found in database")
for author in authors:
if isinstance(author, Author):
@@ -119,10 +129,12 @@ async def precache_data():
author_id = profile.get("id")
user_id = profile.get("user", "").strip()
if author_id and user_id:
logger.debug(f"Precaching author id={author_id}")
await cache_author(profile)
await asyncio.gather(
precache_authors_followers(author_id, session), precache_authors_follows(author_id, session)
)
logger.debug(f"Finished precaching followers and follows for author id={author_id}")
else:
logger.error(f"fail caching {author}")
logger.info(f"{len(authors)} authors and their followings precached")

15
cache/revalidator.py vendored
View File

@@ -8,6 +8,7 @@ from cache.cache import (
invalidate_cache_by_prefix,
)
from resolvers.stat import get_with_stat
from services.redis import redis
from utils.logger import root_logger as logger
CACHE_REVALIDATION_INTERVAL = 300 # 5 minutes
@@ -21,9 +22,19 @@ class CacheRevalidationManager:
self.lock = asyncio.Lock()
self.running = True
self.MAX_BATCH_SIZE = 10 # Максимальное количество элементов для поштучной обработки
self._redis = redis # Добавлена инициализация _redis для доступа к Redis-клиенту
async def start(self):
"""Запуск фонового воркера для ревалидации кэша."""
# Проверяем, что у нас есть соединение с Redis
if not self._redis._client:
logger.warning("Redis connection not established. Waiting for connection...")
try:
await self._redis.connect()
logger.info("Redis connection established for revalidation manager")
except Exception as e:
logger.error(f"Failed to connect to Redis: {e}")
self.task = asyncio.create_task(self.revalidate_cache())
async def revalidate_cache(self):
@@ -39,6 +50,10 @@ class CacheRevalidationManager:
async def process_revalidation(self):
"""Обновление кэша для всех сущностей, требующих ревалидации."""
# Проверяем соединение с Redis
if not self._redis._client:
return # Выходим из метода, если не удалось подключиться
async with self.lock:
# Ревалидация кэша авторов
if self.items_to_revalidate["authors"]:

View File

@@ -147,16 +147,32 @@ await invalidate_topics_cache(456)
```python
class CacheRevalidationManager:
def __init__(self, interval=CACHE_REVALIDATION_INTERVAL):
# ...
self._redis = redis # Прямая ссылка на сервис Redis
async def start(self):
# Проверка и установка соединения с Redis
# ...
async def process_revalidation(self):
# Обработка элементов для ревалидации
# ...
def mark_for_revalidation(self, entity_id, entity_type):
# Добавляет сущность в очередь на ревалидацию
# ...
```
The revalidation manager runs as an asynchronous background process that periodically (every 5 minutes by default) checks whether any entities need revalidation.
Implementation notes:
**Interaction with Redis:**
- CacheRevalidationManager keeps a direct reference to the Redis service via the `_redis` attribute
- On startup the Redis connection is checked and re-established if necessary
- The connection is automatically verified before each revalidation operation
- The system restores the connection on its own when it is lost
**Implementation details:**
- Authors and topics are revalidated record by record
- Shouts and reactions are processed in batches, with a threshold of 10 items
- Once the threshold is reached, the system switches to invalidating whole collections instead of per-item processing (see the sketch below)
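
To make the thresholding rule concrete, here is a minimal, self-contained sketch. The helper names (`invalidate_collection`, `revalidate_one`) and the shape of the queue are assumptions for illustration only; the real manager works through the project's Redis cache helpers.

```python
import asyncio

MAX_BATCH_SIZE = 10  # threshold described above


async def invalidate_collection(entity_type: str) -> None:
    # Stub: drop the whole cached collection for this entity type.
    print(f"invalidate collection cache for {entity_type}")


async def revalidate_one(entity_type: str, entity_id: int) -> None:
    # Stub: refresh the cache entry for a single record.
    print(f"revalidate {entity_type} #{entity_id}")


async def process_revalidation(queue: dict[str, set[int]]) -> None:
    for entity_type, ids in queue.items():
        if not ids:
            continue
        if entity_type in ("shouts", "reactions") and len(ids) > MAX_BATCH_SIZE:
            # Large batch: cheaper to invalidate the collection as a whole.
            await invalidate_collection(entity_type)
        else:
            # Authors and topics (and small batches) are refreshed one by one.
            for entity_id in ids:
                await revalidate_one(entity_type, entity_id)
        ids.clear()


if __name__ == "__main__":
    asyncio.run(process_revalidation({"authors": {1, 2}, "shouts": set(range(25))}))
```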

15
main.py
View File

@@ -17,7 +17,6 @@ from cache.revalidator import revalidation_manager
from services.exception import ExceptionHandlerMiddleware
from services.redis import redis
from services.schema import create_all_tables, resolvers
#from services.search import search_service
from services.search import search_service, initialize_search_index
from services.viewed import ViewedStorage
from services.webhook import WebhookEndpoint, create_webhook_endpoint
@@ -43,6 +42,15 @@ async def check_search_service():
else:
print(f"[INFO] Search service is available: {info}")
# Helper to run precache with timeout and catch errors
async def precache_with_timeout():
try:
await asyncio.wait_for(precache_data(), timeout=60)
except asyncio.TimeoutError:
print("[precache] Precache timed out after 60 seconds")
except Exception as e:
print(f"[precache] Error during precache: {e}")
# indexing DB data
# async def indexing():
@@ -53,9 +61,12 @@ async def lifespan(_app):
try:
print("[lifespan] Starting application initialization")
create_all_tables()
# schedule precaching in background with timeout and error handling
asyncio.create_task(precache_with_timeout())
await asyncio.gather(
redis.connect(),
precache_data(),
ViewedStorage.init(),
create_webhook_endpoint(),
check_search_service(),

View File

@@ -6,6 +6,7 @@ from sqlalchemy.orm import relationship
from orm.author import Author
from orm.topic import Topic
from services.db import Base
from orm.shout import Shout
class DraftTopic(Base):
@@ -26,12 +27,14 @@ class DraftAuthor(Base):
caption = Column(String, nullable=True, default="")
class Draft(Base):
__tablename__ = "draft"
# required
created_at: int = Column(Integer, nullable=False, default=lambda: int(time.time()))
created_by: int = Column(ForeignKey("author.id"), nullable=False)
community: int = Column(ForeignKey("community.id"), nullable=False, default=1)
# Колонки для связей с автором
created_by: int = Column("created_by", ForeignKey("author.id"), nullable=False)
community: int = Column("community", ForeignKey("community.id"), nullable=False, default=1)
# optional
layout: str = Column(String, nullable=True, default="article")
@@ -49,7 +52,55 @@ class Draft(Base):
# auto
updated_at: int | None = Column(Integer, nullable=True, index=True)
deleted_at: int | None = Column(Integer, nullable=True, index=True)
updated_by: int | None = Column(ForeignKey("author.id"), nullable=True)
deleted_by: int | None = Column(ForeignKey("author.id"), nullable=True)
authors = relationship(Author, secondary="draft_author")
topics = relationship(Topic, secondary="draft_topic")
updated_by: int | None = Column("updated_by", ForeignKey("author.id"), nullable=True)
deleted_by: int | None = Column("deleted_by", ForeignKey("author.id"), nullable=True)
# --- Relationships ---
# Только many-to-many связи через вспомогательные таблицы
authors = relationship(Author, secondary="draft_author", lazy="select")
topics = relationship(Topic, secondary="draft_topic", lazy="select")
# Связь с Community (если нужна как объект, а не ID)
# community = relationship("Community", foreign_keys=[community_id], lazy="joined")
# Пока оставляем community_id как ID
# Связь с публикацией (один-к-одному или один-к-нулю)
# Загружается через joinedload в резолвере
publication = relationship(
"Shout",
primaryjoin="Draft.id == Shout.draft",
foreign_keys="Shout.draft",
uselist=False,
lazy="noload", # Не грузим по умолчанию, только через options
viewonly=True # Указываем, что это связь только для чтения
)
def dict(self):
"""
Сериализует объект Draft в словарь.
Гарантирует, что поля topics и authors всегда будут списками.
"""
return {
"id": self.id,
"created_at": self.created_at,
"created_by": self.created_by,
"community": self.community,
"layout": self.layout,
"slug": self.slug,
"title": self.title,
"subtitle": self.subtitle,
"lead": self.lead,
"body": self.body,
"media": self.media or [],
"cover": self.cover,
"cover_caption": self.cover_caption,
"lang": self.lang,
"seo": self.seo,
"updated_at": self.updated_at,
"deleted_at": self.deleted_at,
"updated_by": self.updated_by,
"deleted_by": self.deleted_by,
# Гарантируем, что topics и authors всегда будут списками
"topics": [topic.dict() for topic in (self.topics or [])],
"authors": [author.dict() for author in (self.authors or [])]
}

View File

@@ -8,6 +8,7 @@ from resolvers.author import ( # search_authors,
get_author_id,
get_authors_all,
load_authors_by,
load_authors_search,
update_author,
)
from resolvers.community import get_communities_all, get_community
@@ -16,9 +17,11 @@ from resolvers.draft import (
delete_draft,
load_drafts,
publish_draft,
unpublish_draft,
update_draft,
)
from resolvers.editor import (
unpublish_shout,
)
from resolvers.feed import (
load_shouts_coauthored,
load_shouts_discussed,
@@ -71,6 +74,7 @@ __all__ = [
"get_author_follows_authors",
"get_authors_all",
"load_authors_by",
"load_authors_search",
"update_author",
## "search_authors",
# community

View File

@@ -20,6 +20,7 @@ from services.auth import login_required
from services.db import local_session
from services.redis import redis
from services.schema import mutation, query
from services.search import search_service
from utils.logger import root_logger as logger
DEFAULT_COMMUNITIES = [1]
@@ -301,6 +302,46 @@ async def load_authors_by(_, _info, by, limit, offset):
return await get_authors_with_stats(limit, offset, by)
@query.field("load_authors_search")
async def load_authors_search(_, info, text: str, limit: int = 10, offset: int = 0):
"""
Resolver for searching authors by text. Works with the txt-ai search endpoint.
Args:
text: Search text
limit: Maximum number of authors to return
offset: Offset for pagination
Returns:
list: List of authors matching the search criteria
"""
# Get author IDs from search engine (already sorted by relevance)
search_results = await search_service.search_authors(text, limit, offset)
if not search_results:
return []
author_ids = [result.get("id") for result in search_results if result.get("id")]
if not author_ids:
return []
# Fetch full author objects from DB
with local_session() as session:
# Simple query to get authors by IDs - no need for stats here
authors_query = select(Author).filter(Author.id.in_(author_ids))
db_authors = session.execute(authors_query).scalars().all()
if not db_authors:
return []
# Create a dictionary for quick lookup
authors_dict = {str(author.id): author for author in db_authors}
# Keep the order from search results (maintains the relevance sorting)
ordered_authors = [authors_dict[author_id] for author_id in author_ids if author_id in authors_dict]
return ordered_authors
def get_author_id_from(slug="", user=None, author_id=None):
try:
author_id = None

View File

@@ -1,8 +1,6 @@
import time
from operator import or_
import trafilatura
from sqlalchemy.sql import and_
from sqlalchemy.orm import joinedload
from cache.cache import (
cache_author,
@@ -12,7 +10,7 @@ from cache.cache import (
invalidate_shouts_cache,
)
from orm.author import Author
from orm.draft import Draft
from orm.draft import Draft, DraftAuthor, DraftTopic
from orm.shout import Shout, ShoutAuthor, ShoutTopic
from orm.topic import Topic
from services.auth import login_required
@@ -20,34 +18,70 @@ from services.db import local_session
from services.notify import notify_shout
from services.schema import mutation, query
from services.search import search_service
from utils.html_wrapper import wrap_html_fragment
from utils.logger import root_logger as logger
def create_shout_from_draft(session, draft, author_id):
"""
Создаёт новый объект публикации (Shout) на основе черновика.
Args:
session: SQLAlchemy сессия (не используется, для совместимости)
draft (Draft): Объект черновика
author_id (int): ID автора публикации
Returns:
Shout: Новый объект публикации (не сохранённый в базе)
Пример:
>>> from orm.draft import Draft
>>> draft = Draft(id=1, title='Заголовок', body='Текст', slug='slug', created_by=1)
>>> shout = create_shout_from_draft(None, draft, 1)
>>> shout.title
'Заголовок'
>>> shout.body
'Текст'
>>> shout.created_by
1
"""
# Создаем новую публикацию
shout = Shout(
body=draft.body,
body=draft.body or "",
slug=draft.slug,
cover=draft.cover,
cover_caption=draft.cover_caption,
lead=draft.lead,
title=draft.title,
title=draft.title or "",
subtitle=draft.subtitle,
layout=draft.layout,
media=draft.media,
lang=draft.lang,
layout=draft.layout or "article",
media=draft.media or [],
lang=draft.lang or "ru",
seo=draft.seo,
created_by=author_id,
community=draft.community,
draft=draft.id,
deleted_at=None,
)
# Инициализируем пустые массивы для связей
shout.topics = []
shout.authors = []
return shout
@query.field("load_drafts")
@login_required
async def load_drafts(_, info):
"""
Загружает все черновики, доступные текущему пользователю.
Предварительно загружает связанные объекты (topics, authors, publication),
чтобы избежать ошибок с отсоединенными объектами при сериализации.
Returns:
dict: Список черновиков или сообщение об ошибке
"""
user_id = info.context.get("user_id")
author_dict = info.context.get("author", {})
author_id = author_dict.get("id")
@@ -55,13 +89,44 @@ async def load_drafts(_, info):
if not user_id or not author_id:
return {"error": "User ID and author ID are required"}
try:
with local_session() as session:
drafts = (
# Предзагружаем authors, topics и связанную publication
drafts_query = (
session.query(Draft)
.filter(or_(Draft.authors.any(Author.id == author_id), Draft.created_by == author_id))
.all()
.options(
joinedload(Draft.topics),
joinedload(Draft.authors),
joinedload(Draft.publication) # Загружаем связанную публикацию
)
return {"drafts": drafts}
.filter(Draft.authors.any(Author.id == author_id))
)
drafts = drafts_query.all()
# Преобразуем объекты в словари, пока они в контексте сессии
drafts_data = []
for draft in drafts:
draft_dict = draft.dict()
# Всегда возвращаем массив для topics, даже если он пустой
draft_dict["topics"] = [topic.dict() for topic in (draft.topics or [])]
draft_dict["authors"] = [author.dict() for author in (draft.authors or [])]
# Добавляем информацию о публикации, если она есть
if draft.publication:
draft_dict["publication"] = {
"id": draft.publication.id,
"slug": draft.publication.slug,
"published_at": draft.publication.published_at
}
else:
draft_dict["publication"] = None
drafts_data.append(draft_dict)
return {"drafts": drafts_data}
except Exception as e:
logger.error(f"Failed to load drafts: {e}", exc_info=True)
return {"error": f"Failed to load drafts: {str(e)}"}
@mutation.field("create_draft")
@@ -116,11 +181,17 @@ async def create_draft(_, info, draft_input):
if "id" in draft_input:
del draft_input["id"]
# Добавляем текущее время создания
# Добавляем текущее время создания и ID автора
draft_input["created_at"] = int(time.time())
draft = Draft(created_by=author_id, **draft_input)
draft_input["created_by"] = author_id
draft = Draft(**draft_input)
session.add(draft)
session.flush()
# Добавляем создателя как автора
da = DraftAuthor(shout=draft.id, author=author_id)
session.add(da)
session.commit()
return {"draft": draft}
except Exception as e:
@@ -128,7 +199,8 @@ async def create_draft(_, info, draft_input):
return {"error": f"Failed to create draft: {str(e)}"}
def generate_teaser(body, limit=300):
body_text = trafilatura.extract(body, include_comments=False, include_tables=False)
body_html = wrap_html_fragment(body)
body_text = trafilatura.extract(body_html, include_comments=False, include_tables=False)
body_teaser = ". ".join(body_text[:limit].split(". ")[:-1])
return body_teaser
@@ -140,7 +212,21 @@ async def update_draft(_, info, draft_id: int, draft_input):
Args:
draft_id: ID черновика для обновления
draft_input: Данные для обновления черновика
draft_input: Данные для обновления черновика согласно схеме DraftInput:
- layout: String
- author_ids: [Int!]
- topic_ids: [Int!]
- main_topic_id: Int
- media: [MediaItemInput]
- lead: String
- subtitle: String
- lang: String
- seo: String
- body: String
- title: String
- slug: String
- cover: String
- cover_caption: String
Returns:
dict: Обновленный черновик или сообщение об ошибке
@@ -152,66 +238,89 @@ async def update_draft(_, info, draft_id: int, draft_input):
if not user_id or not author_id:
return {"error": "Author ID are required"}
# Проверяем slug - он должен быть или не пустым, или не передаваться вообще
if "slug" in draft_input and (draft_input["slug"] is None or draft_input["slug"] == ""):
# Если slug пустой, либо удаляем его из входных данных, либо генерируем временный уникальный
# Вариант 1: просто удаляем ключ из входных данных, чтобы оставить старое значение
del draft_input["slug"]
# Вариант 2 (если нужно обновить): генерируем временный уникальный slug
# import uuid
# draft_input["slug"] = f"draft-{uuid.uuid4().hex[:8]}"
try:
with local_session() as session:
draft = session.query(Draft).filter(Draft.id == draft_id).first()
if not draft:
return {"error": "Draft not found"}
# Generate SEO description if not provided and not already set
if "seo" not in draft_input and not draft.seo:
body_src = draft_input.get("body") if "body" in draft_input else draft.body
lead_src = draft_input.get("lead") if "lead" in draft_input else draft.lead
# Фильтруем входные данные, оставляя только разрешенные поля
allowed_fields = {
"layout", "author_ids", "topic_ids", "main_topic_id",
"media", "lead", "subtitle", "lang", "seo", "body",
"title", "slug", "cover", "cover_caption"
}
filtered_input = {k: v for k, v in draft_input.items() if k in allowed_fields}
# Проверяем slug
if "slug" in filtered_input and not filtered_input["slug"]:
del filtered_input["slug"]
# Обновляем связи с авторами если переданы
if "author_ids" in filtered_input:
author_ids = filtered_input.pop("author_ids")
if author_ids:
# Очищаем текущие связи
session.query(DraftAuthor).filter(DraftAuthor.shout == draft_id).delete()
# Добавляем новые связи
for aid in author_ids:
da = DraftAuthor(shout=draft_id, author=aid)
session.add(da)
# Обновляем связи с темами если переданы
if "topic_ids" in filtered_input:
topic_ids = filtered_input.pop("topic_ids")
main_topic_id = filtered_input.pop("main_topic_id", None)
if topic_ids:
# Очищаем текущие связи
session.query(DraftTopic).filter(DraftTopic.shout == draft_id).delete()
# Добавляем новые связи
for tid in topic_ids:
dt = DraftTopic(
shout=draft_id,
topic=tid,
main=(tid == main_topic_id) if main_topic_id else False
)
session.add(dt)
# Генерируем SEO если не предоставлено
if "seo" not in filtered_input and not draft.seo:
body_src = filtered_input.get("body", draft.body)
lead_src = filtered_input.get("lead", draft.lead)
body_html = wrap_html_fragment(body_src)
lead_html = wrap_html_fragment(lead_src)
body_text = None
if body_src:
try:
# Extract text, excluding comments and tables
body_text = trafilatura.extract(body_src, include_comments=False, include_tables=False)
except Exception as e:
logger.warning(f"Trafilatura failed to extract body text for draft {draft_id}: {e}")
body_text = trafilatura.extract(body_html, include_comments=False, include_tables=False) if body_src else None
lead_text = trafilatura.extract(lead_html, include_comments=False, include_tables=False) if lead_src else None
lead_text = None
if lead_src:
try:
# Extract text from lead
lead_text = trafilatura.extract(lead_src, include_comments=False, include_tables=False)
except Exception as e:
logger.warning(f"Trafilatura failed to extract lead text for draft {draft_id}: {e}")
# Generate body teaser only if body_text was successfully extracted
body_teaser = generate_teaser(body_text, 300) if body_text else ""
filtered_input["seo"] = lead_text if lead_text else body_teaser
except Exception as e:
logger.warning(f"Failed to generate SEO for draft {draft_id}: {e}")
# Prioritize lead_text for SEO, fallback to body_teaser. Ensure it's a string.
generated_seo = lead_text if lead_text else body_teaser
draft_input["seo"] = generated_seo if generated_seo else ""
# Обновляем основные поля черновика
for key, value in filtered_input.items():
setattr(draft, key, value)
# Update the draft object with new data from draft_input
# Assuming Draft.update is a helper that iterates keys or similar.
# A more standard SQLAlchemy approach would be:
# for key, value in draft_input.items():
# if hasattr(draft, key):
# setattr(draft, key, value)
# But we stick to the existing pattern for now.
Draft.update(draft, draft_input)
# Set updated timestamp and author
current_time = int(time.time())
draft.updated_at = current_time
draft.updated_by = author_id # Assuming author_id is correctly fetched context
# Обновляем метаданные
draft.updated_at = int(time.time())
draft.updated_by = author_id
session.commit()
# Invalidate cache related to this draft if necessary (consider adding)
# await invalidate_draft_cache(draft_id)
return {"draft": draft}
# Преобразуем объект в словарь для ответа
draft_dict = draft.dict()
draft_dict["topics"] = [topic.dict() for topic in draft.topics]
draft_dict["authors"] = [author.dict() for author in draft.authors]
# Добавляем объект автора в updated_by
draft_dict["updated_by"] = author_dict
return {"draft": draft_dict}
except Exception as e:
logger.error(f"Failed to update draft: {e}", exc_info=True)
return {"error": f"Failed to update draft: {str(e)}"}
@mutation.field("delete_draft")
@@ -231,182 +340,136 @@ async def delete_draft(_, info, draft_id: int):
return {"draft": draft}
def validate_html_content(html_content: str) -> tuple[bool, str]:
"""
Проверяет валидность HTML контента через trafilatura.
Args:
html_content: HTML строка для проверки
Returns:
tuple[bool, str]: (валидность, сообщение об ошибке)
Example:
>>> is_valid, error = validate_html_content("<p>Valid HTML</p>")
>>> is_valid
True
>>> error
''
>>> is_valid, error = validate_html_content("Invalid < HTML")
>>> is_valid
False
>>> 'Invalid HTML' in error
True
"""
if not html_content or not html_content.strip():
return False, "Content is empty"
try:
html_content = wrap_html_fragment(html_content)
extracted = trafilatura.extract(html_content)
if not extracted:
return False, "Invalid HTML structure or empty content"
return True, ""
except Exception as e:
logger.error(f"HTML validation error: {e}", exc_info=True)
return False, f"Invalid HTML content: {str(e)}"
@mutation.field("publish_draft")
@login_required
async def publish_draft(_, info, draft_id: int):
user_id = info.context.get("user_id")
author_dict = info.context.get("author", {})
author_id = author_dict.get("id")
if not user_id or not author_id:
return {"error": "User ID and author ID are required"}
with local_session() as session:
draft = session.query(Draft).filter(Draft.id == draft_id).first()
if not draft:
return {"error": "Draft not found"}
shout = create_shout_from_draft(session, draft, author_id)
session.add(shout)
session.commit()
return {"shout": shout, "draft": draft}
@mutation.field("unpublish_draft")
@login_required
async def unpublish_draft(_, info, draft_id: int):
user_id = info.context.get("user_id")
author_dict = info.context.get("author", {})
author_id = author_dict.get("id")
if not user_id or not author_id:
return {"error": "User ID and author ID are required"}
with local_session() as session:
draft = session.query(Draft).filter(Draft.id == draft_id).first()
if not draft:
return {"error": "Draft not found"}
shout = session.query(Shout).filter(Shout.draft == draft.id).first()
if shout:
shout.published_at = None
session.commit()
return {"shout": shout, "draft": draft}
return {"error": "Failed to unpublish draft"}
@mutation.field("publish_shout")
@login_required
async def publish_shout(_, info, shout_id: int):
"""Publish draft as a shout or update existing shout.
"""
Публикует черновик, создавая новый Shout или обновляя существующий.
Args:
shout_id: ID существующей публикации или 0 для новой
draft: Объект черновика (опционально)
draft_id (int): ID черновика для публикации
Returns:
dict: Результат публикации с shout или сообщением об ошибке
"""
user_id = info.context.get("user_id")
author_dict = info.context.get("author", {})
author_id = author_dict.get("id")
now = int(time.time())
if not user_id or not author_id:
return {"error": "User ID and author ID are required"}
return {"error": "Author ID is required"}
try:
with local_session() as session:
shout = session.query(Shout).filter(Shout.id == shout_id).first()
if not shout:
return {"error": "Shout not found"}
was_published = shout.published_at is not None
draft = session.query(Draft).where(Draft.id == shout.draft).first()
# Загружаем черновик со всеми связями
draft = (
session.query(Draft)
.options(
joinedload(Draft.topics),
joinedload(Draft.authors),
joinedload(Draft.publication)
)
.filter(Draft.id == draft_id)
.first()
)
if not draft:
return {"error": "Draft not found"}
# Находим черновик если не передан
if not shout:
shout = create_shout_from_draft(session, draft, author_id)
else:
# Проверка валидности HTML в body
is_valid, error = validate_html_content(draft.body)
if not is_valid:
return {"error": f"Cannot publish draft: {error}"}
# Проверяем, есть ли уже публикация для этого черновика
if draft.publication:
shout = draft.publication
# Обновляем существующую публикацию
shout.draft = draft.id
shout.created_by = author_id
shout.title = draft.title
shout.subtitle = draft.subtitle
shout.body = draft.body
shout.cover = draft.cover
shout.cover_caption = draft.cover_caption
shout.lead = draft.lead
shout.layout = draft.layout
shout.media = draft.media
shout.lang = draft.lang
shout.seo = draft.seo
draft.updated_at = now
shout.updated_at = now
# Устанавливаем published_at только если была ранее снята с публикации
if not was_published:
for field in ["body", "title", "subtitle", "lead", "cover", "cover_caption", "media", "lang", "seo"]:
if hasattr(draft, field):
setattr(shout, field, getattr(draft, field))
shout.updated_at = int(time.time())
shout.updated_by = author_id
else:
# Создаем новую публикацию
shout = create_shout_from_draft(session, draft, author_id)
now = int(time.time())
shout.created_at = now
shout.published_at = now
session.add(shout)
session.flush() # Получаем ID нового шаута
# Обрабатываем связи с авторами
if (
not session.query(ShoutAuthor)
.filter(and_(ShoutAuthor.shout == shout.id, ShoutAuthor.author == author_id))
.first()
):
sa = ShoutAuthor(shout=shout.id, author=author_id)
# Очищаем существующие связи
session.query(ShoutAuthor).filter(ShoutAuthor.shout == shout.id).delete()
session.query(ShoutTopic).filter(ShoutTopic.shout == shout.id).delete()
# Добавляем авторов
for author in (draft.authors or []):
sa = ShoutAuthor(shout=shout.id, author=author.id)
session.add(sa)
# Обрабатываем темы
if draft.topics:
for topic in draft.topics:
# Добавляем темы
for topic in (draft.topics or []):
st = ShoutTopic(
topic=topic.id, shout=shout.id, main=topic.main if hasattr(topic, "main") else False
topic=topic.id,
shout=shout.id,
main=topic.main if hasattr(topic, "main") else False
)
session.add(st)
session.add(shout)
session.add(draft)
session.flush()
session.commit()
# Инвалидируем кэш только если это новая публикация или была снята с публикации
if not was_published:
cache_keys = ["feed", f"author_{author_id}", "random_top", "unrated"]
# Инвалидируем кеш
invalidate_shouts_cache()
invalidate_shout_related_cache(shout.id)
# Добавляем ключи для тем
for topic in shout.topics:
cache_keys.append(f"topic_{topic.id}")
cache_keys.append(f"topic_shouts_{topic.id}")
await cache_by_id(Topic, topic.id, cache_topic)
# Инвалидируем кэш
await invalidate_shouts_cache(cache_keys)
await invalidate_shout_related_cache(shout, author_id)
# Обновляем кэш авторов
for author in shout.authors:
await cache_by_id(Author, author.id, cache_author)
# Отправляем уведомление о публикации
await notify_shout(shout.dict(), "published")
# Уведомляем о публикации
await notify_shout(shout.id)
# Обновляем поисковый индекс
search_service.index(shout)
else:
# Для уже опубликованных материалов просто отправляем уведомление об обновлении
await notify_shout(shout.dict(), "update")
search_service.index_shout(shout)
logger.info(f"Successfully published shout #{shout.id} from draft #{draft_id}")
logger.debug(f"Shout data: {shout.dict()}")
session.commit()
return {"shout": shout}
except Exception as e:
logger.error(f"Failed to publish shout: {e}", exc_info=True)
if "session" in locals():
session.rollback()
return {"error": f"Failed to publish shout: {str(e)}"}
@mutation.field("unpublish_shout")
@login_required
async def unpublish_shout(_, info, shout_id: int):
"""Unpublish a shout.
Args:
shout_id: The ID of the shout to unpublish
Returns:
dict: The unpublished shout or an error message
"""
author_dict = info.context.get("author", {})
author_id = author_dict.get("id")
if not author_id:
return {"error": "Author ID is required"}
shout = None
with local_session() as session:
try:
shout = session.query(Shout).filter(Shout.id == shout_id).first()
shout.published_at = None
session.commit()
invalidate_shout_related_cache(shout)
invalidate_shouts_cache()
except Exception:
session.rollback()
return {"error": "Failed to unpublish shout"}
return {"shout": shout}
logger.error(f"Failed to publish draft {draft_id}: {e}", exc_info=True)
return {"error": f"Failed to publish draft: {str(e)}"}

View File

@@ -3,7 +3,7 @@ import time
import orjson
import trafilatura
from sqlalchemy import and_, desc, select
from sqlalchemy.orm import joinedload
from sqlalchemy.orm import joinedload, selectinload
from sqlalchemy.sql.functions import coalesce
from cache.cache import (
@@ -13,6 +13,7 @@ from cache.cache import (
invalidate_shouts_cache,
)
from orm.author import Author
from orm.draft import Draft
from orm.shout import Shout, ShoutAuthor, ShoutTopic
from orm.topic import Topic
from resolvers.follower import follow, unfollow
@@ -20,8 +21,9 @@ from resolvers.stat import get_with_stat
from services.auth import login_required
from services.db import local_session
from services.notify import notify_shout
from services.schema import query
from services.schema import mutation, query
from services.search import search_service
from utils.html_wrapper import wrap_html_fragment
from utils.logger import root_logger as logger
@@ -179,9 +181,11 @@ async def create_shout(_, info, inp):
# Создаем публикацию без topics
body = inp.get("body", "")
lead = inp.get("lead", "")
body_text = trafilatura.extract(body)
lead_text = trafilatura.extract(lead)
seo = inp.get("seo", lead_text or body_text[:300].split(". ")[:-1].join(". "))
body_html = wrap_html_fragment(body)
lead_html = wrap_html_fragment(lead)
body_text = trafilatura.extract(body_html)
lead_text = trafilatura.extract(lead_html)
seo = inp.get("seo", lead_text.strip() or body_text.strip()[:300].split(". ")[:-1].join(". "))
new_shout = Shout(
slug=slug,
body=body,
@@ -645,15 +649,18 @@ def get_main_topic(topics):
"""Get the main topic from a list of ShoutTopic objects."""
logger.info(f"Starting get_main_topic with {len(topics) if topics else 0} topics")
logger.debug(
f"Topics data: {[(t.topic.slug if t.topic else 'no-topic', t.main) for t in topics] if topics else []}"
f"Topics data: {[(t.slug, getattr(t, 'main', False)) for t in topics] if topics else []}"
)
if not topics:
logger.warning("No topics provided to get_main_topic")
return {"id": 0, "title": "no topic", "slug": "notopic", "is_main": True}
# Проверяем, является ли topics списком объектов ShoutTopic или Topic
if hasattr(topics[0], 'topic') and topics[0].topic:
# Для ShoutTopic объектов (старый формат)
# Find first main topic in original order
main_topic_rel = next((st for st in topics if st.main), None)
main_topic_rel = next((st for st in topics if getattr(st, 'main', False)), None)
logger.debug(
f"Found main topic relation: {main_topic_rel.topic.slug if main_topic_rel and main_topic_rel.topic else None}"
)
@@ -678,6 +685,142 @@ def get_main_topic(topics):
"is_main": True,
}
return result
else:
# Для Topic объектов (новый формат из selectinload)
# После смены на selectinload у нас просто список Topic объектов
if topics:
logger.info(f"Using first topic as main: {topics[0].slug}")
result = {
"slug": topics[0].slug,
"title": topics[0].title,
"id": topics[0].id,
"is_main": True,
}
return result
logger.warning("No valid topics found, returning default")
return {"slug": "notopic", "title": "no topic", "id": 0, "is_main": True}
@mutation.field("unpublish_shout")
@login_required
async def unpublish_shout(_, info, shout_id: int):
"""Снимает публикацию (shout) с публикации.
Предзагружает связанный черновик (draft) и его авторов/темы, чтобы избежать
ошибок при последующем доступе к ним в GraphQL.
Args:
shout_id: ID публикации для снятия с публикации
Returns:
dict: Снятая с публикации публикация или сообщение об ошибке
"""
author_dict = info.context.get("author", {})
author_id = author_dict.get("id")
if not author_id:
# В идеале нужна проверка прав, имеет ли автор право снимать публикацию
return {"error": "Author ID is required"}
shout = None
with local_session() as session:
try:
# Загружаем Shout со всеми связями для правильного формирования ответа
shout = (
session.query(Shout)
.options(
joinedload(Shout.authors),
selectinload(Shout.topics)
)
.filter(Shout.id == shout_id)
.first()
)
if not shout:
logger.warning(f"Shout not found for unpublish: ID {shout_id}")
return {"error": "Shout not found"}
# Если у публикации есть связанный черновик, загружаем его с relationships
if shout.draft:
# Отдельно загружаем черновик с его связями
draft = (
session.query(Draft)
.options(
selectinload(Draft.authors),
selectinload(Draft.topics)
)
.filter(Draft.id == shout.draft)
.first()
)
# Связываем черновик с публикацией вручную для доступа через API
if draft:
shout.draft_obj = draft
# TODO: Добавить проверку прав доступа, если необходимо
# if author_id not in [a.id for a in shout.authors]: # Требует selectinload(Shout.authors) выше
# logger.warning(f"Author {author_id} denied unpublishing shout {shout_id}")
# return {"error": "Access denied"}
# Запоминаем старый slug и id для формирования поля publication
shout_slug = shout.slug
shout_id_for_publication = shout.id
# Снимаем с публикации (устанавливаем published_at в None)
shout.published_at = None
session.commit()
# Формируем полноценный словарь для ответа
shout_dict = shout.dict()
# Добавляем связанные данные
shout_dict["topics"] = (
[
{"id": topic.id, "slug": topic.slug, "title": topic.title}
for topic in shout.topics
]
if shout.topics
else []
)
# Добавляем main_topic
shout_dict["main_topic"] = get_main_topic(shout.topics)
# Добавляем авторов
shout_dict["authors"] = (
[
{"id": author.id, "name": author.name, "slug": author.slug}
for author in shout.authors
]
if shout.authors
else []
)
# Важно! Обновляем поле publication, отражая состояние "снят с публикации"
shout_dict["publication"] = {
"id": shout_id_for_publication,
"slug": shout_slug,
"published_at": None # Ключевое изменение - устанавливаем published_at в None
}
# Инвалидация кэша
try:
cache_keys = [
"feed", # лента
f"author_{author_id}", # публикации автора
"random_top", # случайные топовые
"unrated", # неоцененные
]
await invalidate_shout_related_cache(shout, author_id)
await invalidate_shouts_cache(cache_keys)
logger.info(f"Cache invalidated after unpublishing shout {shout_id}")
except Exception as cache_err:
logger.error(f"Failed to invalidate cache for unpublish shout {shout_id}: {cache_err}")
except Exception as e:
session.rollback()
logger.error(f"Failed to unpublish shout {shout_id}: {e}", exc_info=True)
return {"error": f"Failed to unpublish shout: {str(e)}"}
# Возвращаем сформированный словарь вместо объекта
logger.info(f"Shout {shout_id} unpublished successfully by author {author_id}")
return {"shout": shout_dict}

View File

@@ -488,11 +488,15 @@ def apply_reaction_filters(by, q):
if shout_slug:
q = q.filter(Shout.slug == shout_slug)
shout_id = by.get("shout_id")
if shout_id:
q = q.filter(Shout.id == shout_id)
shouts = by.get("shouts")
if shouts:
q = q.filter(Shout.slug.in_(shouts))
created_by = by.get("created_by")
created_by = by.get("created_by", by.get("author_id"))
if created_by:
q = q.filter(Author.id == created_by)

View File

@@ -396,38 +396,25 @@ async def load_shouts_search(_, info, text, options):
# Get search results with pagination
results = await search_text(text, limit, offset)
# If no results, return empty list
if not results:
logger.info(f"No search results found for '{text}'")
return []
# Extract IDs and scores
scores = {}
hits_ids = []
for sr in results:
shout_id = sr.get("id")
if shout_id:
shout_id = str(shout_id)
scores[shout_id] = sr.get("score")
hits_ids.append(shout_id)
# Extract IDs in the order from the search engine
hits_ids = [str(sr.get("id")) for sr in results if sr.get("id")]
# Query DB for only the IDs in the current page
q = query_with_stat(info)
q = q.filter(Shout.id.in_(hits_ids))
q = apply_filters(q, options.get("filters", {}))
#
shouts = get_shouts_with_links(info, q, len(hits_ids), 0)
# Add scores from search results
for shout in shouts:
shout_id = str(shout['id'])
shout["score"] = scores.get(shout_id, 0)
# Reorder shouts to match the order from hits_ids
shouts_dict = {str(shout['id']): shout for shout in shouts}
ordered_shouts = [shouts_dict[shout_id] for shout_id in hits_ids if shout_id in shouts_dict]
# Re-sort by search score to maintain ranking
shouts.sort(key=lambda x: scores.get(str(x['id']), 0), reverse=True)
return shouts
return ordered_shouts
return []

View File

@@ -10,6 +10,7 @@ from cache.cache import (
)
from orm.author import Author
from orm.topic import Topic
from orm.reaction import ReactionKind
from resolvers.stat import get_with_stat
from services.auth import login_required
from services.db import local_session
@@ -112,7 +113,7 @@ async def get_topics_with_stats(limit=100, offset=0, community_id=None, by=None)
shouts_stats_query = f"""
SELECT st.topic, COUNT(DISTINCT s.id) as shouts_count
FROM shout_topic st
JOIN shout s ON st.shout = s.id AND s.deleted_at IS NULL
JOIN shout s ON st.shout = s.id AND s.deleted_at IS NULL AND s.published_at IS NOT NULL
WHERE st.topic IN ({",".join(map(str, topic_ids))})
GROUP BY st.topic
"""
@@ -121,7 +122,7 @@ async def get_topics_with_stats(limit=100, offset=0, community_id=None, by=None)
# Запрос на получение статистики по подписчикам для выбранных тем
followers_stats_query = f"""
SELECT topic, COUNT(DISTINCT follower) as followers_count
FROM topic_followers
FROM topic_followers tf
WHERE topic IN ({",".join(map(str, topic_ids))})
GROUP BY topic
"""
@@ -143,7 +144,8 @@ async def get_topics_with_stats(limit=100, offset=0, community_id=None, by=None)
SELECT st.topic, COUNT(DISTINCT r.id) as comments_count
FROM shout_topic st
JOIN shout s ON st.shout = s.id AND s.deleted_at IS NULL AND s.published_at IS NOT NULL
JOIN reaction r ON r.shout = s.id
JOIN reaction r ON r.shout = s.id AND r.kind = '{ReactionKind.COMMENT.value}' AND r.deleted_at IS NULL
JOIN author a ON r.created_by = a.id AND a.deleted_at IS NULL
WHERE st.topic IN ({",".join(map(str, topic_ids))})
GROUP BY st.topic
"""

View File

@@ -92,12 +92,14 @@ input LoadShoutsOptions {
input ReactionBy {
shout: String
shout_id: Int
shouts: [String]
search: String
kinds: [ReactionKind]
reply_to: Int # filter
topic: String
created_by: Int
author_id: Int
author: String
after: Int
sort: ReactionSort # sort

View File

@@ -4,7 +4,7 @@ type Query {
get_author_id(user: String!): Author
get_authors_all: [Author]
load_authors_by(by: AuthorsBy!, limit: Int, offset: Int): [Author]
# search_authors(what: String!): [Author]
load_authors_search(text: String!, limit: Int, offset: Int): [Author!] # Search for authors by name or bio
# community
get_community: Community

View File

@@ -107,6 +107,12 @@ type Shout {
score: Float
}
type PublicationInfo {
id: Int!
slug: String!
published_at: Int
}
type Draft {
id: Int!
created_at: Int!
@@ -129,9 +135,9 @@ type Draft {
deleted_at: Int
updated_by: Author
deleted_by: Author
authors: [Author]
topics: [Topic]
authors: [Author]!
topics: [Topic]!
publication: PublicationInfo
}
type Stat {

View File

@@ -1,34 +0,0 @@
import sys
from pathlib import Path
from granian.constants import Interfaces
from granian.log import LogLevels
from granian.server import Server
from settings import PORT
from utils.logger import root_logger as logger
if __name__ == "__main__":
logger.info("started")
try:
granian_instance = Server(
"main:app",
address="0.0.0.0",
port=PORT,
interface=Interfaces.ASGI,
workers=1,
websockets=False,
log_level=LogLevels.debug,
backlog=2048,
)
if "dev" in sys.argv:
logger.info("dev mode, building ssl context")
granian_instance.build_ssl_context(cert=Path("localhost.pem"), key=Path("localhost-key.pem"), password=None)
granian_instance.serve()
except Exception as error:
logger.error(error, exc_info=True)
raise
finally:
logger.info("stopped")

View File

@@ -53,3 +53,66 @@ async def notify_follower(follower: dict, author_id: int, action: str = "follow"
except Exception as e:
# Log the error and re-raise it
logger.error(f"Failed to publish to channel {channel_name}: {e}")
async def notify_draft(draft_data, action: str = "publish"):
"""
Отправляет уведомление о публикации или обновлении черновика.
Функция гарантирует, что данные черновика сериализуются корректно, включая
связанные атрибуты (topics, authors).
Args:
draft_data (dict): Словарь с данными черновика. Должен содержать минимум id и title
action (str, optional): Действие ("publish", "update"). По умолчанию "publish"
Returns:
None
Examples:
>>> draft = {"id": 1, "title": "Тестовый черновик", "slug": "test-draft"}
>>> await notify_draft(draft, "publish")
"""
channel_name = "draft"
try:
# Убеждаемся, что все необходимые данные присутствуют
# и объект не требует доступа к отсоединенным атрибутам
if isinstance(draft_data, dict):
draft_payload = draft_data
else:
# Если это ORM объект, преобразуем его в словарь с нужными атрибутами
draft_payload = {
"id": getattr(draft_data, "id", None),
"slug": getattr(draft_data, "slug", None),
"title": getattr(draft_data, "title", None),
"subtitle": getattr(draft_data, "subtitle", None),
"media": getattr(draft_data, "media", None),
"created_at": getattr(draft_data, "created_at", None),
"updated_at": getattr(draft_data, "updated_at", None)
}
# Если переданы связанные атрибуты, добавим их
if hasattr(draft_data, "topics") and draft_data.topics is not None:
draft_payload["topics"] = [
{"id": t.id, "name": t.name, "slug": t.slug}
for t in draft_data.topics
]
if hasattr(draft_data, "authors") and draft_data.authors is not None:
draft_payload["authors"] = [
{"id": a.id, "name": a.name, "slug": a.slug, "pic": getattr(a, "pic", None)}
for a in draft_data.authors
]
data = {"payload": draft_payload, "action": action}
# Сохраняем уведомление
save_notification(action, channel_name, data.get("payload"))
# Публикуем в Redis
json_data = orjson.dumps(data)
if json_data:
await redis.publish(channel_name, json_data)
except Exception as e:
logger.error(f"Failed to publish to channel {channel_name}: {e}")

View File

@@ -1,14 +1,15 @@
from asyncio.log import logger
import httpx
from ariadne import MutationType, QueryType
from ariadne import MutationType, ObjectType, QueryType
from services.db import create_table_if_not_exists, local_session
from settings import AUTH_URL
query = QueryType()
mutation = MutationType()
resolvers = [query, mutation]
type_draft = ObjectType("Draft")
resolvers = [query, mutation, type_draft]
async def request_graphql_data(gql, url=AUTH_URL, headers=None):
@@ -28,12 +29,19 @@ async def request_graphql_data(gql, url=AUTH_URL, headers=None):
async with httpx.AsyncClient() as client:
response = await client.post(url, json=gql, headers=headers)
if response.status_code == 200:
# Check if the response has content before parsing
if response.content and len(response.content.strip()) > 0:
try:
data = response.json()
errors = data.get("errors")
if errors:
logger.error(f"{url} response: {data}")
else:
return data
except Exception as json_err:
logger.error(f"JSON decode error: {json_err}, Response content: {response.text[:100]}")
else:
logger.error(f"{url}: Response is empty")
else:
logger.error(f"{url}: {response.status_code} {response.text}")
except Exception as _e:

View File

@@ -4,24 +4,35 @@ import logging
import os
import httpx
import time
import random
from collections import defaultdict
from datetime import datetime, timedelta
# Set up proper logging
logger = logging.getLogger("search")
logger.setLevel(logging.INFO) # Change to INFO to see more details
# Disable noise HTTP client logging
logging.getLogger("httpx").setLevel(logging.WARNING)
logging.getLogger("httpcore").setLevel(logging.WARNING)
# Configuration for search service
SEARCH_ENABLED = bool(os.environ.get("SEARCH_ENABLED", "true").lower() in ["true", "1", "yes"])
SEARCH_ENABLED = bool(
os.environ.get("SEARCH_ENABLED", "true").lower() in ["true", "1", "yes"]
)
TXTAI_SERVICE_URL = os.environ.get("TXTAI_SERVICE_URL", "none")
MAX_BATCH_SIZE = int(os.environ.get("SEARCH_MAX_BATCH_SIZE", "25"))
# Search cache configuration
SEARCH_CACHE_ENABLED = bool(os.environ.get("SEARCH_CACHE_ENABLED", "true").lower() in ["true", "1", "yes"])
SEARCH_CACHE_TTL_SECONDS = int(os.environ.get("SEARCH_CACHE_TTL_SECONDS", "900")) # Default: 15 minutes
SEARCH_MIN_SCORE = float(os.environ.get("SEARCH_MIN_SCORE", "0.1"))
SEARCH_CACHE_ENABLED = bool(
os.environ.get("SEARCH_CACHE_ENABLED", "true").lower() in ["true", "1", "yes"]
)
SEARCH_CACHE_TTL_SECONDS = int(
os.environ.get("SEARCH_CACHE_TTL_SECONDS", "300")
) # Default: 15 minutes
SEARCH_PREFETCH_SIZE = int(os.environ.get("SEARCH_PREFETCH_SIZE", "200"))
SEARCH_USE_REDIS = bool(os.environ.get("SEARCH_USE_REDIS", "true").lower() in ["true", "1", "yes"])
SEARCH_USE_REDIS = bool(
os.environ.get("SEARCH_USE_REDIS", "true").lower() in ["true", "1", "yes"]
)
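The boolean flags above all repeat the same parsing idiom; a small helper sketch (not part of the diff) that would centralize it:

import os

def env_flag(name: str, default: str = "true") -> bool:
    """Parse a boolean feature flag from the environment ('true'/'1'/'yes')."""
    return os.environ.get(name, default).lower() in ("true", "1", "yes")

# e.g. SEARCH_ENABLED = env_flag("SEARCH_ENABLED")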
search_offset = 0
@@ -29,11 +40,13 @@ search_offset = 0
if SEARCH_USE_REDIS:
try:
from services.redis import redis
logger.info("Redis client imported for search caching")
except ImportError:
logger.warning("Redis client import failed, falling back to memory cache")
SEARCH_USE_REDIS = False
class SearchCache:
"""Cache for search results to enable efficient pagination"""
@@ -54,9 +67,11 @@ class SearchCache:
await redis.set(
f"{self._redis_prefix}{normalized_query}",
serialized_results,
ex=self.ttl
ex=self.ttl,
)
logger.info(
f"Stored {len(results)} search results for query '{query}' in Redis"
)
logger.info(f"Stored {len(results)} search results for query '{query}' in Redis")
return True
except Exception as e:
logger.error(f"Error storing search results in Redis: {e}")
@@ -69,7 +84,9 @@ class SearchCache:
# Store results and update timestamp
self.cache[normalized_query] = results
self.last_accessed[normalized_query] = time.time()
logger.info(f"Cached {len(results)} search results for query '{query}' in memory")
logger.info(
f"Cached {len(results)} search results for query '{query}' in memory"
)
return True
async def get(self, query, limit=10, offset=0):
@@ -101,10 +118,14 @@ class SearchCache:
# Return paginated subset
end_idx = min(offset + limit, len(all_results))
if offset >= len(all_results):
logger.warning(f"Requested offset {offset} exceeds result count {len(all_results)}")
logger.warning(
f"Requested offset {offset} exceeds result count {len(all_results)}"
)
return []
logger.info(f"Cache hit for '{query}': serving {offset}:{end_idx} of {len(all_results)} results")
logger.info(
f"Cache hit for '{query}': serving {offset}:{end_idx} of {len(all_results)} results"
)
return all_results[offset:end_idx]
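A toy illustration (assumed names, in-memory only) of the store-once, page-many contract that the cache get method implements above:

# Store the full result set once, then answer subsequent pages from the cached list.
class MemoryPager:
    def __init__(self):
        self._store = {}

    def store(self, query, results):
        self._store[query.strip().lower()] = results

    def get(self, query, limit=10, offset=0):
        all_results = self._store.get(query.strip().lower(), [])
        if offset >= len(all_results):
            return []
        return all_results[offset : offset + limit]

pager = MemoryPager()
pager.store("futurism", [{"id": str(i)} for i in range(50)])
assert len(pager.get("futurism", limit=10, offset=40)) == 10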
async def has_query(self, query):
@@ -155,7 +176,8 @@ class SearchCache:
now = time.time()
# First remove expired entries
expired_keys = [
key for key, last_access in self.last_accessed.items()
key
for key, last_access in self.last_accessed.items()
if now - last_access > self.ttl
]
@@ -180,6 +202,7 @@ class SearchCache:
del self.last_accessed[key]
logger.info(f"Removed {remove_count} oldest search cache entries")
class SearchService:
def __init__(self):
logger.info(f"Initializing search service with URL: {TXTAI_SERVICE_URL}")
@@ -195,8 +218,9 @@ class SearchService:
if SEARCH_CACHE_ENABLED:
cache_location = "Redis" if SEARCH_USE_REDIS else "Memory"
logger.info(f"Search caching enabled using {cache_location} cache with TTL={SEARCH_CACHE_TTL_SECONDS}s")
logger.info(f"Minimum score filter: {SEARCH_MIN_SCORE}, prefetch size: {SEARCH_PREFETCH_SIZE}")
logger.info(
f"Search caching enabled using {cache_location} cache with TTL={SEARCH_CACHE_TTL_SECONDS}s"
)
async def info(self):
"""Return information about search service"""
@@ -216,7 +240,6 @@ class SearchService:
"""Check if service is available"""
return self.available
async def verify_docs(self, doc_ids):
"""Verify which documents exist in the search index across all content types"""
if not self.available:
@@ -227,7 +250,7 @@ class SearchService:
response = await self.client.post(
"/verify-docs",
json={"doc_ids": doc_ids},
timeout=60.0 # Longer timeout for potentially large ID lists
timeout=60.0, # Longer timeout for potentially large ID lists
)
response.raise_for_status()
result = response.json()
@@ -245,8 +268,12 @@ class SearchService:
titles_missing_count = len(titles_missing)
total_missing_count = len(all_missing)
logger.info(f"Document verification complete: {bodies_missing_count} bodies missing, {titles_missing_count} titles missing")
logger.info(f"Total unique missing documents: {total_missing_count} out of {len(doc_ids)} total")
logger.info(
f"Document verification complete: {bodies_missing_count} bodies missing, {titles_missing_count} titles missing"
)
logger.info(
f"Total unique missing documents: {total_missing_count} out of {len(doc_ids)} total"
)
# Return in a backwards-compatible format plus the detailed breakdown
return {
@@ -255,14 +282,13 @@ class SearchService:
"bodies_missing": list(bodies_missing),
"titles_missing": list(titles_missing),
"bodies_missing_count": bodies_missing_count,
"titles_missing_count": titles_missing_count
}
"titles_missing_count": titles_missing_count,
},
}
except Exception as e:
logger.error(f"Document verification error: {e}")
return {"status": "error", "message": str(e)}
def index(self, shout):
"""Index a single document"""
if not self.available:
@@ -281,40 +307,37 @@ class SearchService:
indexing_tasks = []
# 1. Index title if available
if hasattr(shout, 'title') and shout.title and isinstance(shout.title, str):
title_doc = {
"id": str(shout.id),
"title": shout.title.strip()
}
if hasattr(shout, "title") and shout.title and isinstance(shout.title, str):
title_doc = {"id": str(shout.id), "title": shout.title.strip()}
indexing_tasks.append(
self.index_client.post("/index-title", json=title_doc)
)
# 2. Index body content (subtitle, lead, body)
body_text_parts = []
for field_name in ['subtitle', 'lead', 'body']:
for field_name in ["subtitle", "lead", "body"]:
field_value = getattr(shout, field_name, None)
if field_value and isinstance(field_value, str) and field_value.strip():
body_text_parts.append(field_value.strip())
# Process media content if available
media = getattr(shout, 'media', None)
media = getattr(shout, "media", None)
if media:
if isinstance(media, str):
try:
media_json = json.loads(media)
if isinstance(media_json, dict):
if 'title' in media_json:
body_text_parts.append(media_json['title'])
if 'body' in media_json:
body_text_parts.append(media_json['body'])
if "title" in media_json:
body_text_parts.append(media_json["title"])
if "body" in media_json:
body_text_parts.append(media_json["body"])
except json.JSONDecodeError:
body_text_parts.append(media)
elif isinstance(media, dict):
if 'title' in media:
body_text_parts.append(media['title'])
if 'body' in media:
body_text_parts.append(media['body'])
if "title" in media:
body_text_parts.append(media["title"])
if "body" in media:
body_text_parts.append(media["body"])
if body_text_parts:
body_text = " ".join(body_text_parts)
@@ -323,57 +346,58 @@ class SearchService:
if len(body_text) > MAX_TEXT_LENGTH:
body_text = body_text[:MAX_TEXT_LENGTH]
body_doc = {
"id": str(shout.id),
"body": body_text
}
body_doc = {"id": str(shout.id), "body": body_text}
indexing_tasks.append(
self.index_client.post("/index-body", json=body_doc)
)
# 3. Index authors
authors = getattr(shout, 'authors', [])
authors = getattr(shout, "authors", [])
for author in authors:
author_id = str(getattr(author, 'id', 0))
if not author_id or author_id == '0':
author_id = str(getattr(author, "id", 0))
if not author_id or author_id == "0":
continue
name = getattr(author, 'name', '')
name = getattr(author, "name", "")
# Combine bio and about fields
bio_parts = []
bio = getattr(author, 'bio', '')
bio = getattr(author, "bio", "")
if bio and isinstance(bio, str):
bio_parts.append(bio.strip())
about = getattr(author, 'about', '')
about = getattr(author, "about", "")
if about and isinstance(about, str):
bio_parts.append(about.strip())
combined_bio = " ".join(bio_parts)
if name:
author_doc = {
"id": author_id,
"name": name,
"bio": combined_bio
}
author_doc = {"id": author_id, "name": name, "bio": combined_bio}
indexing_tasks.append(
self.index_client.post("/index-author", json=author_doc)
)
# Run all indexing tasks in parallel
if indexing_tasks:
responses = await asyncio.gather(*indexing_tasks, return_exceptions=True)
responses = await asyncio.gather(
*indexing_tasks, return_exceptions=True
)
# Check for errors in responses
for i, response in enumerate(responses):
if isinstance(response, Exception):
logger.error(f"Error in indexing task {i}: {response}")
elif hasattr(response, 'status_code') and response.status_code >= 400:
logger.error(f"Error response in indexing task {i}: {response.status_code}, {await response.text()}")
elif (
hasattr(response, "status_code") and response.status_code >= 400
):
logger.error(
f"Error response in indexing task {i}: {response.status_code}, {await response.text()}"
)
logger.info(f"Document {shout.id} indexed across {len(indexing_tasks)} endpoints")
logger.info(
f"Document {shout.id} indexed across {len(indexing_tasks)} endpoints"
)
else:
logger.warning(f"No content to index for shout {shout.id}")
@@ -383,7 +407,9 @@ class SearchService:
async def bulk_index(self, shouts):
"""Index multiple documents across three separate endpoints"""
if not self.available or not shouts:
logger.warning(f"Bulk indexing skipped: available={self.available}, shouts_count={len(shouts) if shouts else 0}")
logger.warning(
f"Bulk indexing skipped: available={self.available}, shouts_count={len(shouts) if shouts else 0}"
)
return
start_time = time.time()
@@ -399,37 +425,44 @@ class SearchService:
for shout in shouts:
try:
# 1. Process title documents
if hasattr(shout, 'title') and shout.title and isinstance(shout.title, str):
title_docs.append({
"id": str(shout.id),
"title": shout.title.strip()
})
if (
hasattr(shout, "title")
and shout.title
and isinstance(shout.title, str)
):
title_docs.append(
{"id": str(shout.id), "title": shout.title.strip()}
)
# 2. Process body documents (subtitle, lead, body)
body_text_parts = []
for field_name in ['subtitle', 'lead', 'body']:
for field_name in ["subtitle", "lead", "body"]:
field_value = getattr(shout, field_name, None)
if field_value and isinstance(field_value, str) and field_value.strip():
if (
field_value
and isinstance(field_value, str)
and field_value.strip()
):
body_text_parts.append(field_value.strip())
# Process media content if available
media = getattr(shout, 'media', None)
media = getattr(shout, "media", None)
if media:
if isinstance(media, str):
try:
media_json = json.loads(media)
if isinstance(media_json, dict):
if 'title' in media_json:
body_text_parts.append(media_json['title'])
if 'body' in media_json:
body_text_parts.append(media_json['body'])
if "title" in media_json:
body_text_parts.append(media_json["title"])
if "body" in media_json:
body_text_parts.append(media_json["body"])
except json.JSONDecodeError:
body_text_parts.append(media)
elif isinstance(media, dict):
if 'title' in media:
body_text_parts.append(media['title'])
if 'body' in media:
body_text_parts.append(media['body'])
if "title" in media:
body_text_parts.append(media["title"])
if "body" in media:
body_text_parts.append(media["body"])
# Only add body document if we have body text
if body_text_parts:
@@ -439,31 +472,28 @@ class SearchService:
if len(body_text) > MAX_TEXT_LENGTH:
body_text = body_text[:MAX_TEXT_LENGTH]
body_docs.append({
"id": str(shout.id),
"body": body_text
})
body_docs.append({"id": str(shout.id), "body": body_text})
# 3. Process authors if available
authors = getattr(shout, 'authors', [])
authors = getattr(shout, "authors", [])
for author in authors:
author_id = str(getattr(author, 'id', 0))
if not author_id or author_id == '0':
author_id = str(getattr(author, "id", 0))
if not author_id or author_id == "0":
continue
# Skip if we've already processed this author
if author_id in author_docs:
continue
name = getattr(author, 'name', '')
name = getattr(author, "name", "")
# Combine bio and about fields
bio_parts = []
bio = getattr(author, 'bio', '')
bio = getattr(author, "bio", "")
if bio and isinstance(bio, str):
bio_parts.append(bio.strip())
about = getattr(author, 'about', '')
about = getattr(author, "about", "")
if about and isinstance(about, str):
bio_parts.append(about.strip())
@@ -474,21 +504,26 @@ class SearchService:
author_docs[author_id] = {
"id": author_id,
"name": name,
"bio": combined_bio
"bio": combined_bio,
}
except Exception as e:
logger.error(f"Error processing shout {getattr(shout, 'id', 'unknown')} for indexing: {e}")
logger.error(
f"Error processing shout {getattr(shout, 'id', 'unknown')} for indexing: {e}"
)
total_skipped += 1
# Convert author dict to list
author_docs_list = list(author_docs.values())
# Log indexing started message
logger.info("indexing started...")
# Process each endpoint in parallel
indexing_tasks = [
self._index_endpoint(title_docs, "/bulk-index-titles", "title"),
self._index_endpoint(body_docs, "/bulk-index-bodies", "body"),
self._index_endpoint(author_docs_list, "/bulk-index-authors", "author")
self._index_endpoint(author_docs_list, "/bulk-index-authors", "author"),
]
await asyncio.gather(*indexing_tasks)
@@ -509,19 +544,27 @@ class SearchService:
logger.info(f"Indexing {len(documents)} {doc_type} documents")
# Categorize documents by size
small_docs, medium_docs, large_docs = self._categorize_by_size(documents, doc_type)
small_docs, medium_docs, large_docs = self._categorize_by_size(
documents, doc_type
)
# Process each category with appropriate batch sizes
batch_sizes = {
"small": min(MAX_BATCH_SIZE, 15),
"medium": min(MAX_BATCH_SIZE, 10),
"large": min(MAX_BATCH_SIZE, 3)
"large": min(MAX_BATCH_SIZE, 3),
}
for category, docs in [("small", small_docs), ("medium", medium_docs), ("large", large_docs)]:
for category, docs in [
("small", small_docs),
("medium", medium_docs),
("large", large_docs),
]:
if docs:
batch_size = batch_sizes[category]
await self._process_batches(docs, batch_size, endpoint, f"{doc_type}-{category}")
await self._process_batches(
docs, batch_size, endpoint, f"{doc_type}-{category}"
)
def _categorize_by_size(self, documents, doc_type):
"""Categorize documents by size for optimized batch processing"""
@@ -548,13 +591,15 @@ class SearchService:
else:
small_docs.append(doc)
logger.info(f"{doc_type.capitalize()} documents categorized: {len(small_docs)} small, {len(medium_docs)} medium, {len(large_docs)} large")
logger.info(
f"{doc_type.capitalize()} documents categorized: {len(small_docs)} small, {len(medium_docs)} medium, {len(large_docs)} large"
)
return small_docs, medium_docs, large_docs
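The middle of _categorize_by_size is elided by this hunk; a plausible sketch of length-based bucketing under assumed thresholds (not the service's actual values):

def categorize_by_length(documents, text_key="body", small_max=1_000, medium_max=5_000):
    """Split documents into small/medium/large buckets by text length."""
    small, medium, large = [], [], []
    for doc in documents:
        size = len(doc.get(text_key, "") or "")
        if size > medium_max:
            large.append(doc)
        elif size > small_max:
            medium.append(doc)
        else:
            small.append(doc)
    return small, medium, large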
async def _process_batches(self, documents, batch_size, endpoint, batch_prefix):
"""Process document batches with retry logic"""
for i in range(0, len(documents), batch_size):
batch = documents[i:i+batch_size]
batch = documents[i : i + batch_size]
batch_id = f"{batch_prefix}-{i//batch_size + 1}"
retry_count = 0
@@ -563,134 +608,137 @@ class SearchService:
while not success and retry_count < max_retries:
try:
logger.info(f"Sending batch {batch_id} ({len(batch)} docs) to {endpoint}")
response = await self.index_client.post(
endpoint,
json=batch,
timeout=90.0
endpoint, json=batch, timeout=90.0
)
if response.status_code == 422:
error_detail = response.json()
logger.error(f"Validation error from search service for batch {batch_id}: {self._truncate_error_detail(error_detail)}")
logger.error(
f"Validation error from search service for batch {batch_id}: {self._truncate_error_detail(error_detail)}"
)
break
response.raise_for_status()
success = True
logger.info(f"Successfully indexed batch {batch_id}")
except Exception as e:
retry_count += 1
if retry_count >= max_retries:
if len(batch) > 1:
mid = len(batch) // 2
logger.warning(f"Splitting batch {batch_id} into smaller batches for retry")
await self._process_batches(batch[:mid], batch_size // 2, endpoint, f"{batch_prefix}-{i//batch_size}-A")
await self._process_batches(batch[mid:], batch_size // 2, endpoint, f"{batch_prefix}-{i//batch_size}-B")
await self._process_batches(
batch[:mid],
batch_size // 2,
endpoint,
f"{batch_prefix}-{i//batch_size}-A",
)
await self._process_batches(
batch[mid:],
batch_size // 2,
endpoint,
f"{batch_prefix}-{i//batch_size}-B",
)
else:
logger.error(f"Failed to index single document in batch {batch_id} after {max_retries} attempts: {str(e)}")
logger.error(
f"Failed to index single document in batch {batch_id} after {max_retries} attempts: {str(e)}"
)
break
wait_time = (2 ** retry_count) + (random.random() * 0.5)
logger.warning(f"Retrying batch {batch_id} in {wait_time:.1f}s... (attempt {retry_count+1}/{max_retries})")
wait_time = (2**retry_count) + (random.random() * 0.5)
await asyncio.sleep(wait_time)
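The retry logic above combines exponential backoff, random jitter, and batch splitting; the backoff part in isolation looks roughly like this sketch (constants assumed):

import asyncio
import random

async def retry_with_backoff(send, payload, max_retries: int = 3):
    """Call `send(payload)` until it succeeds or retries are exhausted."""
    for attempt in range(1, max_retries + 1):
        try:
            return await send(payload)
        except Exception:
            if attempt == max_retries:
                raise
            # Exponential backoff with a small random jitter, e.g. ~2.x s, ~4.x s, ...
            wait_time = (2 ** attempt) + random.random() * 0.5
            await asyncio.sleep(wait_time)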
def _truncate_error_detail(self, error_detail):
"""Truncate error details for logging"""
truncated_detail = error_detail.copy() if isinstance(error_detail, dict) else error_detail
truncated_detail = (
error_detail.copy() if isinstance(error_detail, dict) else error_detail
)
if isinstance(truncated_detail, dict) and 'detail' in truncated_detail and isinstance(truncated_detail['detail'], list):
for i, item in enumerate(truncated_detail['detail']):
if isinstance(item, dict) and 'input' in item:
if isinstance(item['input'], dict) and any(k in item['input'] for k in ['documents', 'text']):
if 'documents' in item['input'] and isinstance(item['input']['documents'], list):
for j, doc in enumerate(item['input']['documents']):
if 'text' in doc and isinstance(doc['text'], str) and len(doc['text']) > 100:
item['input']['documents'][j]['text'] = f"{doc['text'][:100]}... [truncated, total {len(doc['text'])} chars]"
if (
isinstance(truncated_detail, dict)
and "detail" in truncated_detail
and isinstance(truncated_detail["detail"], list)
):
for i, item in enumerate(truncated_detail["detail"]):
if isinstance(item, dict) and "input" in item:
if isinstance(item["input"], dict) and any(
k in item["input"] for k in ["documents", "text"]
):
if "documents" in item["input"] and isinstance(
item["input"]["documents"], list
):
for j, doc in enumerate(item["input"]["documents"]):
if (
"text" in doc
and isinstance(doc["text"], str)
and len(doc["text"]) > 100
):
item["input"]["documents"][j][
"text"
] = f"{doc['text'][:100]}... [truncated, total {len(doc['text'])} chars]"
if 'text' in item['input'] and isinstance(item['input']['text'], str) and len(item['input']['text']) > 100:
item['input']['text'] = f"{item['input']['text'][:100]}... [truncated, total {len(item['input']['text'])} chars]"
if (
"text" in item["input"]
and isinstance(item["input"]["text"], str)
and len(item["input"]["text"]) > 100
):
item["input"][
"text"
] = f"{item['input']['text'][:100]}... [truncated, total {len(item['input']['text'])} chars]"
return truncated_detail
#*******************
# Specialized search methods for titles, bodies, and authors
async def search_titles(self, text, limit=10, offset=0):
"""Search only in titles using the specialized endpoint"""
if not self.available or not text.strip():
async def search(self, text, limit, offset):
"""Search documents"""
if not self.available:
return []
cache_key = f"title:{text}"
if not isinstance(text, str) or not text.strip():
return []
# Try cache first if enabled
# Check if we can serve from cache
if SEARCH_CACHE_ENABLED:
if await self.cache.has_query(cache_key):
return await self.cache.get(cache_key, limit, offset)
has_cache = await self.cache.has_query(text)
if has_cache:
cached_results = await self.cache.get(text, limit, offset)
if cached_results is not None:
return cached_results
# Not in cache or cache disabled, perform new search
try:
logger.info(f"Searching titles for: '{text}' (limit={limit}, offset={offset})")
search_limit = limit
if SEARCH_CACHE_ENABLED:
search_limit = SEARCH_PREFETCH_SIZE
else:
search_limit = limit
logger.info(f"Searching for: '{text}' (limit={limit}, offset={offset}, search_limit={search_limit})")
response = await self.client.post(
"/search-title",
json={"text": text, "limit": limit + offset}
"/search-combined",
json={"text": text, "limit": search_limit},
)
response.raise_for_status()
result = response.json()
title_results = result.get("results", [])
formatted_results = result.get("results", [])
# Apply score filtering if needed
if SEARCH_MIN_SCORE > 0:
title_results = [r for r in title_results if r.get("score", 0) >= SEARCH_MIN_SCORE]
# filter out nonnumeric IDs
valid_results = [r for r in formatted_results if r.get("id", "").isdigit()]
if len(valid_results) != len(formatted_results):
formatted_results = valid_results
# Store in cache if enabled
if SEARCH_CACHE_ENABLED:
await self.cache.store(cache_key, title_results)
# Apply offset/limit (API might not support it directly)
return title_results[offset:offset+limit]
# Store the full prefetch batch, then page it
await self.cache.store(text, formatted_results)
return await self.cache.get(text, limit, offset)
return formatted_results
except Exception as e:
logger.error(f"Error searching titles for '{text}': {e}")
return []
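A hedged usage sketch of the prefetch-and-cache flow above; page_search is illustrative and relies on the search_service singleton and the get_search_count helper defined later in this file.

# Illustrative pagination over the prefetch cache: the first call warms the cache
# with up to SEARCH_PREFETCH_SIZE results, later pages are served from Redis/memory
# without re-querying the search service.
async def page_search(text: str, page: int, per_page: int = 10):
    offset = page * per_page
    results = await search_service.search(text, per_page, offset)
    total = await get_search_count(text)
    return {"results": results, "total": total, "page": page}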
async def search_bodies(self, text, limit=10, offset=0):
"""Search only in document bodies using the specialized endpoint"""
if not self.available or not text.strip():
return []
cache_key = f"body:{text}"
# Try cache first if enabled
if SEARCH_CACHE_ENABLED:
if await self.cache.has_query(cache_key):
return await self.cache.get(cache_key, limit, offset)
try:
logger.info(f"Searching bodies for: '{text}' (limit={limit}, offset={offset})")
response = await self.client.post(
"/search-body",
json={"text": text, "limit": limit + offset}
)
response.raise_for_status()
result = response.json()
body_results = result.get("results", [])
# Apply score filtering if needed
if SEARCH_MIN_SCORE > 0:
body_results = [r for r in body_results if r.get("score", 0) >= SEARCH_MIN_SCORE]
# Store in cache if enabled
if SEARCH_CACHE_ENABLED:
await self.cache.store(cache_key, body_results)
# Apply offset/limit
return body_results[offset:offset+limit]
except Exception as e:
logger.error(f"Error searching bodies for '{text}': {e}")
logger.error(f"Search error for '{text}': {e}", exc_info=True)
return []
async def search_authors(self, text, limit=10, offset=0):
@@ -700,45 +748,50 @@ class SearchService:
cache_key = f"author:{text}"
# Try cache first if enabled
# Check if we can serve from cache
if SEARCH_CACHE_ENABLED:
if await self.cache.has_query(cache_key):
return await self.cache.get(cache_key, limit, offset)
has_cache = await self.cache.has_query(cache_key)
if has_cache:
cached_results = await self.cache.get(cache_key, limit, offset)
if cached_results is not None:
return cached_results
# Not in cache or cache disabled, perform new search
try:
logger.info(f"Searching authors for: '{text}' (limit={limit}, offset={offset})")
search_limit = limit
if SEARCH_CACHE_ENABLED:
search_limit = SEARCH_PREFETCH_SIZE
else:
search_limit = limit
logger.info(
f"Searching authors for: '{text}' (limit={limit}, offset={offset}, search_limit={search_limit})"
)
response = await self.client.post(
"/search-author",
json={"text": text, "limit": limit + offset}
"/search-author", json={"text": text, "limit": search_limit}
)
response.raise_for_status()
result = response.json()
author_results = result.get("results", [])
# Apply score filtering if needed
if SEARCH_MIN_SCORE > 0:
author_results = [r for r in author_results if r.get("score", 0) >= SEARCH_MIN_SCORE]
# Filter out any invalid results if necessary
valid_results = [r for r in author_results if r.get("id", "").isdigit()]
if len(valid_results) != len(author_results):
author_results = valid_results
# Store in cache if enabled
if SEARCH_CACHE_ENABLED:
# Store the full prefetch batch, then page it
await self.cache.store(cache_key, author_results)
return await self.cache.get(cache_key, limit, offset)
# Apply offset/limit
return author_results[offset:offset+limit]
return author_results[offset : offset + limit]
except Exception as e:
logger.error(f"Error searching authors for '{text}': {e}")
return []
async def search(self, text, limit, offset):
"""
Legacy search method that searches only bodies for backward compatibility.
Consider using the specialized search methods instead.
"""
logger.warning("Using deprecated search() method - consider using search_bodies(), search_titles(), or search_authors()")
return await self.search_bodies(text, limit, offset)
async def check_index_status(self):
"""Get detailed statistics about the search index health"""
if not self.available:
@@ -750,7 +803,9 @@ class SearchService:
result = response.json()
if result.get("consistency", {}).get("status") != "ok":
null_count = result.get("consistency", {}).get("null_embeddings_count", 0)
null_count = result.get("consistency", {}).get(
"null_embeddings_count", 0
)
if null_count > 0:
logger.warning(f"Found {null_count} documents with NULL embeddings")
@@ -759,22 +814,19 @@ class SearchService:
logger.error(f"Failed to check index status: {e}")
return {"status": "error", "message": str(e)}
# Create the search service singleton
search_service = SearchService()
# API-compatible function to perform a search
async def search_title_text(text: str, limit: int = 10, offset: int = 0):
"""Search titles API helper function"""
if search_service.available:
return await search_service.search_titles(text, limit, offset)
return []
async def search_body_text(text: str, limit: int = 10, offset: int = 0):
"""Search bodies API helper function"""
async def search_text(text: str, limit: int = 200, offset: int = 0):
payload = []
if search_service.available:
return await search_service.search_bodies(text, limit, offset)
return []
payload = await search_service.search(text, limit, offset)
return payload
async def search_author_text(text: str, limit: int = 10, offset: int = 0):
"""Search authors API helper function"""
@@ -782,31 +834,18 @@ async def search_author_text(text: str, limit: int = 10, offset: int = 0):
return await search_service.search_authors(text, limit, offset)
return []
async def get_title_search_count(text: str):
async def get_search_count(text: str):
"""Get count of title search results"""
if not search_service.available:
return 0
if SEARCH_CACHE_ENABLED:
cache_key = f"title:{text}"
if await search_service.cache.has_query(cache_key):
return await search_service.cache.get_total_count(cache_key)
if SEARCH_CACHE_ENABLED and await search_service.cache.has_query(text):
return await search_service.cache.get_total_count(text)
# If not found in cache, fetch from endpoint
return len(await search_title_text(text, SEARCH_PREFETCH_SIZE, 0))
return len(await search_text(text, SEARCH_PREFETCH_SIZE, 0))
async def get_body_search_count(text: str):
"""Get count of body search results"""
if not search_service.available:
return 0
if SEARCH_CACHE_ENABLED:
cache_key = f"body:{text}"
if await search_service.cache.has_query(cache_key):
return await search_service.cache.get_total_count(cache_key)
# If not found in cache, fetch from endpoint
return len(await search_body_text(text, SEARCH_PREFETCH_SIZE, 0))
async def get_author_search_count(text: str):
"""Get count of author search results"""
@@ -821,6 +860,7 @@ async def get_author_search_count(text: str):
# If not found in cache, fetch from endpoint
return len(await search_author_text(text, SEARCH_PREFETCH_SIZE, 0))
async def initialize_search_index(shouts_data):
"""Initialize search index with existing data during application startup"""
if not SEARCH_ENABLED:
@@ -838,35 +878,58 @@ async def initialize_search_index(shouts_data):
index_status = await search_service.check_index_status()
if index_status.get("status") == "inconsistent":
problem_ids = index_status.get("consistency", {}).get("null_embeddings_sample", [])
problem_ids = index_status.get("consistency", {}).get(
"null_embeddings_sample", []
)
if problem_ids:
problem_docs = [shout for shout in shouts_data if str(shout.id) in problem_ids]
problem_docs = [
shout for shout in shouts_data if str(shout.id) in problem_ids
]
if problem_docs:
await search_service.bulk_index(problem_docs)
db_ids = [str(shout.id) for shout in shouts_data]
try:
    numeric_ids = [int(sid) for sid in db_ids if sid.isdigit()]
    if numeric_ids:
        min_id = min(numeric_ids)
        max_id = max(numeric_ids)
        id_range = max_id - min_id + 1
except Exception as e:
    pass
# Only consider shouts with body content for body verification
def has_body_content(shout):
    for field in ["subtitle", "lead", "body"]:
        if (
            getattr(shout, field, None)
            and isinstance(getattr(shout, field, None), str)
            and getattr(shout, field).strip()
        ):
            return True
    media = getattr(shout, "media", None)
    if media:
        if isinstance(media, str):
            try:
                media_json = json.loads(media)
                if isinstance(media_json, dict) and (
                    media_json.get("title") or media_json.get("body")
                ):
                    return True
            except Exception:
                return True
        elif isinstance(media, dict):
            if media.get("title") or media.get("body"):
                return True
    return False
shouts_with_body = [shout for shout in shouts_data if has_body_content(shout)]
body_ids = [str(shout.id) for shout in shouts_with_body]
if abs(indexed_doc_count - len(shouts_data)) > 10:
doc_ids = [str(shout.id) for shout in shouts_data]
verification = await search_service.verify_docs(doc_ids)
if verification.get("status") == "error":
return
missing_ids = verification.get("missing", [])
# Only reindex missing docs that actually have body content
missing_ids = [
mid for mid in verification.get("missing", []) if mid in body_ids
]
if missing_ids:
missing_docs = [shout for shout in shouts_data if str(shout.id) in missing_ids]
missing_docs = [
shout for shout in shouts_with_body if str(shout.id) in missing_ids
]
await search_service.bulk_index(missing_docs)
else:
pass
@@ -874,14 +937,14 @@ async def initialize_search_index(shouts_data):
try:
test_query = "test"
# Use body search since that's most likely to return results
test_results = await search_body_text(test_query, 5)
test_results = await search_text(test_query, 5)
if test_results:
categories = set()
for result in test_results:
result_id = result.get("id")
matching_shouts = [s for s in shouts_data if str(s.id) == result_id]
if matching_shouts and hasattr(matching_shouts[0], 'category'):
categories.add(getattr(matching_shouts[0], 'category', 'unknown'))
if matching_shouts and hasattr(matching_shouts[0], "category"):
categories.add(getattr(matching_shouts[0], "category", "unknown"))
except Exception as e:
pass

utils/__init__.py Normal file

@@ -0,0 +1 @@

utils/html_wrapper.py Normal file

@@ -0,0 +1,38 @@
"""
Модуль для обработки HTML-фрагментов
"""
def wrap_html_fragment(fragment: str) -> str:
"""
Wraps an HTML fragment in a full HTML structure for correct processing.
Args:
fragment: the HTML fragment to process
Returns:
str: a complete HTML document
Example:
>>> wrap_html_fragment("<p>Текст параграфа</p>")
'<!DOCTYPE html><html><head><meta charset="utf-8"></head><body><p>Текст параграфа</p></body></html>'
"""
if not fragment or not fragment.strip():
return fragment
# Check whether the content is already a complete HTML document
is_full_html = fragment.strip().startswith('<!DOCTYPE') or fragment.strip().startswith('<html')
# If it is a fragment, wrap it in a complete HTML document
if not is_full_html:
return f"""<!DOCTYPE html>
<html>
<head>
<meta charset="utf-8">
<title></title>
</head>
<body>
{fragment}
</body>
</html>"""
return fragment
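A usage sketch for wrap_html_fragment: hand the wrapped result to an HTML parser that expects a full document. BeautifulSoup is an assumption here, not a dependency declared in this diff.

from bs4 import BeautifulSoup  # assumed parser, not part of this module

def extract_text(fragment: str) -> str:
    """Wrap a fragment, then strip tags down to plain text."""
    full_html = wrap_html_fragment(fragment)
    soup = BeautifulSoup(full_html, "html.parser")
    return soup.get_text(separator=" ", strip=True)

# extract_text("<p>Текст параграфа</p>") -> "Текст параграфа"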