123 Commits

Author SHA1 Message Date
Stepan Vladovskiy
2bebfbd4df debug: force rebuild core staging branch
All checks were successful
Deploy on push / deploy (push) Successful in 44s
2025-05-19 15:45:13 -03:00
Stepan Vladovskiy
f19248184a debug: without versions for starlette and ariadne
All checks were successful
Deploy on push / deploy (push) Successful in 1m5s
2025-05-18 22:48:34 +00:00
Stepan Vladovskiy
7df9361daa debug: Dockerfile with build-essential
Some checks failed
Deploy on push / deploy (push) Failing after 1m36s
2025-05-18 22:43:20 +00:00
Stepan Vladovskiy
e38a1c1338 with vers for starlette, ariadne, granian
Some checks failed
Deploy on push / deploy (push) Failing after 50s
2025-05-18 22:36:47 +00:00
Stepan Vladovskiy
1281157d93 feat: check before parsing GraphQL
All checks were successful
Deploy on push / deploy (push) Successful in 44s
2025-05-14 14:42:40 -03:00
Stepan Vladovskiy
0018749905 Merge branch 'dev' into staging
All checks were successful
Deploy on push / deploy (push) Successful in 1m28s
2025-05-14 14:33:52 -03:00
a6b3b21894 draft-topics-fix2
All checks were successful
Deploy on push / deploy (push) Successful in 45s
2025-05-07 10:37:18 +03:00
51de649686 draft-topics-fix
All checks were successful
Deploy on push / deploy (push) Successful in 47s
2025-05-07 10:22:30 +03:00
2b7d5a25b5 unpublish-fix7
All checks were successful
Deploy on push / deploy (push) Successful in 46s
2025-05-03 11:52:14 +03:00
32cb810f51 unpublish-fix7 2025-05-03 11:52:10 +03:00
d2a8c23076 unpublish-fix5
All checks were successful
Deploy on push / deploy (push) Successful in 45s
2025-05-03 11:47:35 +03:00
96afda77a6 unpublish-fix5
All checks were successful
Deploy on push / deploy (push) Successful in 47s
2025-05-03 11:35:03 +03:00
785548d055 cache-revalidation-fix
All checks were successful
Deploy on push / deploy (push) Successful in 47s
2025-05-03 11:11:14 +03:00
d6202561a9 unpublish-fix4
All checks were successful
Deploy on push / deploy (push) Successful in 47s
2025-05-03 11:07:03 +03:00
3fbd2e677a unpublish-fix3
All checks were successful
Deploy on push / deploy (push) Successful in 47s
2025-05-03 11:00:19 +03:00
4f1eab513a unpublish-fix2
All checks were successful
Deploy on push / deploy (push) Successful in 46s
2025-05-03 10:57:55 +03:00
44852a1553 delete-draft-fix2 2025-05-03 10:56:34 +03:00
58ec60262b unpublish,delete-draft-fix
All checks were successful
Deploy on push / deploy (push) Successful in 1m23s
2025-05-03 10:53:40 +03:00
Stepan Vladovskiy
c344fcee2d refactoring(search.py): logs for search-combine and search-authors are equal
All checks were successful
Deploy on push / deploy (push) Successful in 6s
2025-05-02 18:28:06 -03:00
Stepan Vladovskiy
a1a61a6731 feat: follow same logic as search shouts for authors. Store them to Redis cache + pagination
All checks were successful
Deploy on push / deploy (push) Successful in 41s
2025-05-02 18:17:05 -03:00
Stepan Vladovskiy
8d6ad2c84f refactor(author.py): remove verbose logging at resolver level 2025-05-02 18:04:10 -03:00
Stepan Vladovskiy
beba1992e9 fix(__init__.py): clean name of resolver for authors search loading
All checks were successful
Deploy on push / deploy (push) Successful in 39s
2025-04-29 19:49:47 -03:00
Stepan Vladovskiy
b0296d7747 fix(__init__.py): added created resolver to resolvers lists
All checks were successful
Deploy on push / deploy (push) Successful in 41s
2025-04-29 19:40:20 -03:00
Stepan Vladovskiy
98e3dff35e fix(author.py): resolver load_authors_search error fix
All checks were successful
Deploy on push / deploy (push) Successful in 40s
2025-04-29 18:00:38 -03:00
Stepan Vladovskiy
3782a9dffb fix(search.py, author.py): small fixes for startup; logger import fails
All checks were successful
Deploy on push / deploy (push) Successful in 40s
2025-04-29 17:50:51 -03:00
Stepan Vladovskiy
93c00b3dd1 feat(author.py): add resolver for searching authors by text
All checks were successful
Deploy on push / deploy (push) Successful in 1m15s
2025-04-29 17:45:37 -03:00
5f3d90fc90 draft-publication-debug
All checks were successful
Deploy on push / deploy (push) Successful in 48s
2025-04-28 16:24:08 +03:00
f71fc7fde9 draft-publication-info
All checks were successful
Deploy on push / deploy (push) Successful in 46s
2025-04-28 11:10:18 +03:00
ed71405082 topic-commented-stat-fix4
All checks were successful
Deploy on push / deploy (push) Successful in 46s
2025-04-28 10:30:58 +03:00
79e1f15a2e topic-commented-stat-fix3
All checks were successful
Deploy on push / deploy (push) Successful in 47s
2025-04-28 10:30:04 +03:00
b17acae0af topic-commented-stat-fix2
All checks were successful
Deploy on push / deploy (push) Successful in 46s
2025-04-28 10:24:48 +03:00
d293819ad9 topic-commented-stat-fix
All checks were successful
Deploy on push / deploy (push) Successful in 48s
2025-04-28 10:13:29 +03:00
bcbfdd76e9 html wrap fix
All checks were successful
Deploy on push / deploy (push) Successful in 48s
2025-04-27 12:53:49 +03:00
b735bf8cab draft-creator-adding
All checks were successful
Deploy on push / deploy (push) Successful in 47s
2025-04-27 09:15:07 +03:00
20fd40df0e updated-by-auto-fix
All checks were successful
Deploy on push / deploy (push) Successful in 47s
2025-04-26 23:46:07 +03:00
bde3211a5f updated-by-auto
All checks were successful
Deploy on push / deploy (push) Successful in 49s
2025-04-26 17:07:02 +03:00
4cd8883d72 validhtmlfix 2025-04-26 17:02:55 +03:00
0939e91700 empty-body-fix
All checks were successful
Deploy on push / deploy (push) Successful in 45s
2025-04-26 16:19:33 +03:00
dfbdfba2f0 draft-create-fix5
All checks were successful
Deploy on push / deploy (push) Successful in 46s
2025-04-26 16:13:07 +03:00
b66e347c91 draft-create-fix
All checks were successful
Deploy on push / deploy (push) Successful in 45s
2025-04-26 16:03:41 +03:00
6d9513f1b2 reaction-by-fix4
All checks were successful
Deploy on push / deploy (push) Successful in 45s
2025-04-26 15:57:51 +03:00
af7fbd2fc9 reaction-by-fix3
All checks were successful
Deploy on push / deploy (push) Successful in 47s
2025-04-26 15:50:20 +03:00
631ad47fe8 reaction-by-fix2
All checks were successful
Deploy on push / deploy (push) Successful in 46s
2025-04-26 15:47:44 +03:00
3b3cc1c1d8 reaction-by-fix
All checks were successful
Deploy on push / deploy (push) Successful in 35s
2025-04-26 15:42:15 +03:00
e4943f524c reaction-by-upgrade2
All checks were successful
Deploy on push / deploy (push) Successful in 36s
2025-04-26 15:35:31 +03:00
e7684c9c05 reaction-by-upgrade
All checks were successful
Deploy on push / deploy (push) Successful in 36s
2025-04-26 14:10:05 +03:00
bdae2abe25 drafts schema restore + publish/unpublish fixes
All checks were successful
Deploy on push / deploy (push) Successful in 32s
2025-04-26 13:11:12 +03:00
a310d59432 draft-resolvers
All checks were successful
Deploy on push / deploy (push) Successful in 46s
2025-04-26 11:45:16 +03:00
6b2ac09f74 unpublish-fix
All checks were successful
Deploy on push / deploy (push) Successful in 46s
2025-04-26 10:16:55 +03:00
Stepan Vladovskiy
fac43e5997 refactor(search,reader): without any kind of sorting
All checks were successful
Deploy on push / deploy (push) Successful in 42s
2025-04-24 21:00:41 -03:00
Stepan Vladovskiy
e7facf8d87 style(search.py): with indexing message
All checks were successful
Deploy on push / deploy (push) Successful in 42s
2025-04-24 18:45:00 -03:00
Stepan Vladovskiy
3062a2b7de refactor(search.py): check titles without bodies to avoid re-indexing them on every startup
All checks were successful
Deploy on push / deploy (push) Successful in 42s
2025-04-24 14:58:14 -03:00
Stepan Vladovskiy
c0406dbbf2 refactor(search.py): without logger and rm duplicated def search_text
All checks were successful
Deploy on push / deploy (push) Successful in 44s
2025-04-24 14:18:14 -03:00
Stepan Vladovskiy
ab4610575f refactor(reader.py): to handle search combined
All checks were successful
Deploy on push / deploy (push) Successful in 44s
2025-04-24 13:56:38 -03:00
Stepan Vladovskiy
5425dbf832 refactor(search.py): simplify def search 2025-04-24 13:46:58 -03:00
Stepan Vladovskiy
a10db2d38a feat(search.py): combined search on shout titles and bodies 2025-04-24 13:35:36 -03:00
5024e963e3 notify-draft-hotfix
All checks were successful
Deploy on push / deploy (push) Successful in 47s
2025-04-24 12:12:48 +03:00
Stepan Vladovskiy
83e70856cd debug(server.py): I don't know why, but it appears and I am removing it
All checks were successful
Deploy on push / deploy (push) Successful in 41s
2025-04-23 18:32:58 -03:00
Stepan Vladovskiy
11654dba68 feat: with three separate endpoints
All checks were successful
Deploy on push / deploy (push) Successful in 5s
2025-04-23 18:24:00 -03:00
Stepan Vladovskiy
ec9465ad40 merge dev
All checks were successful
Deploy on push / deploy (push) Successful in 46s
2025-04-20 19:24:59 -03:00
Stepan Vladovskiy
4d965fb27b feat(search.py): separate indexing of Shout Title, shout Body and Authors
All checks were successful
Deploy on push / deploy (push) Successful in 39s
2025-04-20 19:22:08 -03:00
aaa6022a53 draft-create-fix4
All checks were successful
Deploy on push / deploy (push) Successful in 46s
2025-04-16 14:17:59 +03:00
d6ada44c7f draft-create-fix3
All checks were successful
Deploy on push / deploy (push) Successful in 48s
2025-04-16 11:51:19 +03:00
243f836f0a draft-create-fix2
All checks were successful
Deploy on push / deploy (push) Successful in 46s
2025-04-16 11:48:47 +03:00
536c094e72 draft-create-fix
All checks were successful
Deploy on push / deploy (push) Successful in 48s
2025-04-16 11:45:38 +03:00
Stepan Vladovskiy
e382cc1ea5 Merge branch 'dev' into feat/sv-searching-txtai
All checks were successful
Deploy on push / deploy (push) Successful in 6s
2025-04-15 19:20:48 -03:00
6920351b82 schema-fix
All checks were successful
Deploy on push / deploy (push) Successful in 52s
2025-04-15 20:30:12 +03:00
eb216a5f36 draft-seo-handling
All checks were successful
Deploy on push / deploy (push) Successful in 1m10s
2025-04-15 20:16:01 +03:00
bd129efde6 update-seo-handling 2025-04-15 20:14:42 +03:00
b9f6033e66 generate seo text when draft created 2025-04-15 20:09:22 +03:00
710f522c8f schema-upgrade
All checks were successful
Deploy on push / deploy (push) Successful in 47s
2025-04-14 19:53:14 +03:00
0de4404cb1 draft-community
All checks were successful
Deploy on push / deploy (push) Successful in 47s
2025-04-14 16:02:19 +03:00
83d61ca76d Merge branch 'dev' into feat/sv-searching-txtai
All checks were successful
Deploy on push / deploy (push) Successful in 6s
2025-04-13 05:36:18 +00:00
1c61e889d6 update-draft-fix
All checks were successful
Deploy on push / deploy (push) Successful in 47s
2025-04-10 22:51:07 +03:00
fdedb75a2c topics-comments-stat
All checks were successful
Deploy on push / deploy (push) Successful in 45s
2025-04-10 19:14:27 +03:00
f20000f1f6 topic.stat.authors-fix2
All checks were successful
Deploy on push / deploy (push) Successful in 45s
2025-04-10 18:46:09 +03:00
7d50638b3a topic.stat.authors-fix
All checks were successful
Deploy on push / deploy (push) Successful in 1m20s
2025-04-10 18:39:31 +03:00
Stepan Vladovskiy
106222b0e0 debug: without debug logging. clean
All checks were successful
Deploy on push / deploy (push) Successful in 1m27s
2025-04-07 11:41:48 -03:00
Stepan Vladovskiy
c533241d1e fix(reader): sorting by rank, not by id, in cache
All checks were successful
Deploy on push / deploy (push) Successful in 6s
2025-04-03 13:51:13 -03:00
Stepan Vladovskiy
78326047bf fix(reader.py): change sorting and responses to queries
All checks were successful
Deploy on push / deploy (push) Successful in 50s
2025-04-03 13:20:18 -03:00
Stepan Vladovskiy
bc4ec79240 fix(search.py): store all results in cache, not only the first offset
All checks were successful
Deploy on push / deploy (push) Successful in 52s
2025-04-03 13:10:53 -03:00
Stepan Vladovskiy
a0db5707c4 feat: add a cache for storing search results and keep them for working pagination. Now we have an offset to use on the frontend
All checks were successful
Deploy on push / deploy (push) Successful in 51s
2025-04-01 16:01:09 -03:00
Stepan Vladovskiy
ecc443c3ad refactor(reader.py): Remove the unnecessary topic joins that cause duplicate results
All checks were successful
Deploy on push / deploy (push) Successful in 51s
2025-04-01 12:57:46 -03:00
Stepan Vladovskiy
9a02ca74ad merged with dev
All checks were successful
Deploy on push / deploy (push) Successful in 1m24s
2025-03-31 13:38:32 -03:00
Stepan Vladovskiy
9ebb81cbd3 refactor(reader.py): rm debug line 2025-03-31 13:32:51 -03:00
abbc074474 updateby-fix
All checks were successful
Deploy on push / deploy (push) Successful in 54s
2025-03-31 14:39:02 +03:00
Stepan Vladovskiy
0bc55977ac debug(reader.py): query_with_stat(info) always
All checks were successful
Deploy on push / deploy (push) Successful in 51s
2025-03-27 15:18:08 -03:00
Stepan Vladovskiy
ff3a4debce debug(reader.py): trying to handle main topic ids found
All checks were successful
Deploy on push / deploy (push) Successful in 54s
2025-03-27 14:43:17 -03:00
Stepan Vladovskiy
ae85b32f69 feat(type.graphql): SearchResult with shout id
All checks were successful
Deploy on push / deploy (push) Successful in 51s
2025-03-27 14:06:52 -03:00
Stepan Vladovskiy
34a354e9e3 debug(reader.py): bring back shout id in query call
All checks were successful
Deploy on push / deploy (push) Successful in 52s
2025-03-27 11:54:56 -03:00
4f599e097f [0.4.17] - 2025-03-26
All checks were successful
Deploy on push / deploy (push) Successful in 54s
- Fixed `'Reaction' object is not subscriptable` error in hierarchical comments:
  - Modified `get_reactions_with_stat()` to convert Reaction objects to dictionaries
  - Added default values for limit/offset parameters
  - Fixed `load_first_replies()` implementation with proper parameter passing
  - Added doctest with example usage
  - Limited child comments to 100 per parent for performance
2025-03-26 08:54:10 +03:00
a5eaf4bb65 commented->comments_count
All checks were successful
Deploy on push / deploy (push) Successful in 55s
2025-03-26 08:25:18 +03:00
Stepan Vladovskiy
e405fb527b refactor(search.py): moved to one docs table for embeddings and docs store
All checks were successful
Deploy on push / deploy (push) Successful in 50s
2025-03-25 16:42:44 -03:00
Stepan Vladovskiy
7f36f93d92 feat(search.py): detects both missing documents and null embeddings
All checks were successful
Deploy on push / deploy (push) Successful in 1m32s
2025-03-25 15:18:29 -03:00
Stepan Vladovskiy
f089a32394 debug(search.py): more logs when checking sync of indexing
All checks were successful
Deploy on push / deploy (push) Successful in 1m3s
2025-03-25 14:44:05 -03:00
Stepan Vladovskiy
1fd623a660 feat: with index sync endpoints configs
All checks were successful
Deploy on push / deploy (push) Successful in 56s
2025-03-25 13:31:45 -03:00
Stepan Vladovskiy
88012f1b8c debug(server.py): with 4 workers (threads), checking reindexing
All checks were successful
Deploy on push / deploy (push) Successful in 55s
2025-03-25 12:21:59 -03:00
Stepan Vladovskiy
6e284640c0 feat: give a little timeout for resource stabilization
All checks were successful
Deploy on push / deploy (push) Successful in 51s
2025-03-24 21:42:51 -03:00
Stepan Vladovskiy
077cb46482 debug: server.py -> threads 1, search.py -> add 3 reconnect attempts
All checks were successful
Deploy on push / deploy (push) Successful in 49s
2025-03-24 20:16:07 -03:00
Stepan Vladovskiy
60a13a9097 refactor(search.py): moved initialization logic into the search-txtai instance
All checks were successful
Deploy on push / deploy (push) Successful in 55s
2025-03-24 19:47:02 -03:00
3c56fdfaea get_topics_paginated-fix
All checks were successful
Deploy on push / deploy (push) Successful in 56s
2025-03-22 18:49:15 +03:00
81a8bf3c58 query-type-fix
All checks were successful
Deploy on push / deploy (push) Successful in 49s
2025-03-22 18:44:31 +03:00
fe9984e2d8 type-fix
All checks were successful
Deploy on push / deploy (push) Successful in 43s
2025-03-22 18:39:14 +03:00
369ff757b0 [0.4.16] - 2025-03-22
All checks were successful
Deploy on push / deploy (push) Successful in 6s
- Added hierarchical comments pagination:
  - Created new GraphQL query `load_comments_branch` for efficient loading of hierarchical comments
  - Ability to load root comments with their first N replies
  - Added pagination for both root and child comments
  - Using existing `commented` field in `Stat` type to display number of replies
  - Added special `first_replies` field to store first replies to a comment
  - Optimized SQL queries for efficient loading of comment hierarchies
  - Implemented flexible comment sorting system (by time, rating)
2025-03-22 13:37:43 +03:00
Stepan Vladovskiy
316375bf18 debug(search.py): increase batch size for bulk indexing
All checks were successful
Deploy on push / deploy (push) Successful in 1m1s
2025-03-21 17:56:54 -03:00
Stepan Vladovskiy
fb820f67fd debug(search.py): increase batch size for bulk indexing
All checks were successful
Deploy on push / deploy (push) Successful in 53s
2025-03-21 17:48:26 -03:00
Stepan Vladovskiy
f1d9f4e036 feat(search.py): with db reset endpoint
All checks were successful
Deploy on push / deploy (push) Successful in 53s
2025-03-21 17:28:54 -03:00
Stepan Vladovskiy
ebb67eb311 debug: decrease chars in search.py for bulk indexing
All checks were successful
Deploy on push / deploy (push) Successful in 52s
2025-03-21 16:53:00 -03:00
Stepan Vladovskiy
50a8c24ead feat(search.py): documents for bulk indexing are categorized
All checks were successful
Deploy on push / deploy (push) Successful in 55s
2025-03-21 15:40:29 -03:00
Stepan Vladovskiy
eb4b9363ab debug: change log entries; indexing no longer wraps everything in documents
All checks were successful
Deploy on push / deploy (push) Successful in 53s
2025-03-21 14:32:45 -03:00
Stepan Vladovskiy
19c5028a0c debug: Limit max chars for bulk indexing
All checks were successful
Deploy on push / deploy (push) Successful in 53s
2025-03-21 14:18:32 -03:00
Stepan Vladovskiy
57e1e8e6bd debug: more logs in indexing
All checks were successful
Deploy on push / deploy (push) Successful in 53s
2025-03-21 14:10:09 -03:00
Stepan Vladovskiy
385057ffcd debug: with logs in indexing procedure
All checks were successful
Deploy on push / deploy (push) Successful in 54s
2025-03-21 13:45:50 -03:00
Stepan Vladovskiy
90699768ff debug: start index
All checks were successful
Deploy on push / deploy (push) Successful in 55s
2025-03-21 13:30:23 -03:00
Stepan Vladovskiy
ad0ca75aa9 debug: no redis for indexing on backend side
All checks were successful
Deploy on push / deploy (push) Successful in 1m41s
2025-03-19 14:47:31 -03:00
Stepan Vladovskiy
39242d5e6c debug: add logs in search.py and change input validation ... index ver too
All checks were successful
Deploy on push / deploy (push) Successful in 55s
2025-03-12 14:13:55 -03:00
Stepan Vladovskiy
24cca7f2cb debug: something wrong, one step back with logs
All checks were successful
Deploy on push / deploy (push) Successful in 53s
2025-03-12 13:11:19 -03:00
Stepan Vladovskiy
a9c7ac49d6 feat: with logs >>>
All checks were successful
Deploy on push / deploy (push) Successful in 59s
2025-03-12 13:07:27 -03:00
Stepan Vladovskiy
f249752db5 feat: moved txtai and search procedure into a different instance
All checks were successful
Deploy on push / deploy (push) Successful in 2m18s
2025-03-12 12:06:09 -03:00
Stepan Vladovskiy
c0b2116da2 feat(db.py): added fetch_all_shouts, to populate the search index
All checks were successful
Deploy on push / deploy (push) Successful in 35s
2025-03-05 20:32:34 +00:00
Stepan Vladovskiy
59e71c8144 debug: fixed gitea workflows
All checks were successful
Deploy on push / deploy (push) Successful in 4m41s
2025-03-05 20:17:34 +00:00
Stepan Vladovskiy
e6a416383d debug: fixed gitea workflows
All checks were successful
Deploy on push / deploy (push) Successful in 15s
2025-03-05 20:16:32 +00:00
Stepan Vladovskiy
d55448398d feat(search.py): change to txtai server, with ai model. And fix granian workers 2025-03-05 20:08:21 +00:00
31 changed files with 2384 additions and 562 deletions

Deploy on push workflow

@@ -29,7 +29,16 @@ jobs:
       if: github.ref == 'refs/heads/dev'
       uses: dokku/github-action@master
       with:
-        branch: 'dev'
+        branch: 'main'
         force: true
         git_remote_url: 'ssh://dokku@v2.discours.io:22/core'
         ssh_private_key: ${{ secrets.SSH_PRIVATE_KEY }}
+    - name: Push to dokku for staging branch
+      if: github.ref == 'refs/heads/staging'
+      uses: dokku/github-action@master
+      with:
+        branch: 'dev'
+        git_remote_url: 'ssh://dokku@staging.discours.io:22/core'
+        ssh_private_key: ${{ secrets.SSH_PRIVATE_KEY }}
+        git_push_flags: '--force'

.gitignore (6 changes)

@@ -128,6 +128,9 @@ dmypy.json
 .idea
 temp.*
+
+# Debug
+DEBUG.log
 discours.key
 discours.crt
 discours.pem
@@ -161,4 +164,5 @@ views.json
 *.key
 *.crt
 *cache.json
 .cursor
+.devcontainer/

CHANGELOG.md

@@ -1,3 +1,44 @@
+#### [0.4.20] - 2025-05-03
+- Fixed a bug in the `CacheRevalidationManager` class: added initialization of the `_redis` attribute
+- Improved Redis connection handling in the cache revalidation manager:
+  - Automatic reconnection when the connection is lost
+  - Connection check before performing cache operations
+  - Additional logging to simplify diagnostics
+- Fixed the `unpublish_shout` resolver:
+  - Correctly builds the synthetic `publication` field with `published_at: null`
+  - Returns a complete data dictionary instead of a model object
+  - Improved loading of related data (authors, topics) to build a correct response
+#### [0.4.19] - 2025-04-14
+- dropped `Shout.description` and `Draft.description` to be UX-generated
+- use redis to init views counters after migrator
+#### [0.4.18] - 2025-04-10
+- Fixed `Topic.stat.authors` and `Topic.stat.comments`
+- Fixed unique constraint violation for empty slug values:
+  - Modified `update_draft` resolver to handle empty slug values
+  - Modified `create_draft` resolver to prevent empty slug values
+  - Added validation to prevent inserting or updating drafts with empty slug
+  - Fixed database error "duplicate key value violates unique constraint draft_slug_key"
+#### [0.4.17] - 2025-03-26
+- Fixed `'Reaction' object is not subscriptable` error in hierarchical comments:
+  - Modified `get_reactions_with_stat()` to convert Reaction objects to dictionaries
+  - Added default values for limit/offset parameters
+  - Fixed `load_first_replies()` implementation with proper parameter passing
+  - Added doctest with example usage
+  - Limited child comments to 100 per parent for performance
+#### [0.4.16] - 2025-03-22
+- Added hierarchical comments pagination:
+  - Created new GraphQL query `load_comments_branch` for efficient loading of hierarchical comments
+  - Ability to load root comments with their first N replies
+  - Added pagination for both root and child comments
+  - Using existing `comments_count` field in `Stat` type to display number of replies
+  - Added special `first_replies` field to store first replies to a comment
+  - Optimized SQL queries for efficient loading of comment hierarchies
+  - Implemented flexible comment sorting system (by time, rating)
 #### [0.4.15] - 2025-03-22
 - Upgraded caching system described `docs/caching.md`
 - Module `cache/memorycache.py` removed
@@ -31,8 +72,7 @@
 - Implemented persistent Redis caching for author queries without TTL (invalidated only on changes)
 - Optimized author retrieval with separate endpoints:
   - `get_authors_all` - returns all non-deleted authors without statistics
-  - `get_authors_paginated` - returns authors with statistics and pagination support
-  - `load_authors_by` - optimized to use caching and efficient sorting
+  - `load_authors_by` - optimized to use caching and efficient sorting and pagination
 - Improved SQL queries with optimized JOIN conditions and efficient filtering
 - Added pre-aggregation of statistics (shouts count, followers count) in single efficient queries
 - Implemented robust cache invalidation on author updates
@@ -44,7 +84,6 @@
 - Implemented persistent Redis caching for topic queries (no TTL, invalidated only on changes)
 - Optimized topic retrieval with separate endpoints for different use cases:
   - `get_topics_all` - returns all topics without statistics for lightweight listing
-  - `get_topics_paginated` - returns topics with statistics and pagination support
   - `get_topics_by_community` - adds pagination and optimized filtering by community
 - Added SQLAlchemy-managed indexes directly in ORM models for automatic schema maintenance
 - Created `sync_indexes()` function for automatic index synchronization during app startup
@@ -142,7 +181,7 @@
 #### [0.4.4]
 - `followers_stat` removed for shout
 - sqlite3 support added
-- `rating_stat` and `commented_stat` fixes
+- `rating_stat` and `comments_count` fixes
 #### [0.4.3]
 - cache reimplemented

Dockerfile

@@ -3,6 +3,7 @@ FROM python:slim
 RUN apt-get update && apt-get install -y \
     postgresql-client \
     curl \
+    build-essential \
     && rm -rf /var/lib/apt/lists/*

 WORKDIR /app

app/resolvers/draft.py (new file, 1 line)

@@ -0,0 +1 @@
+
cache/cache.py (5 changes)

@@ -545,8 +545,9 @@ async def get_cached_data(key: str) -> Optional[Any]:
     try:
         cached_data = await redis.execute("GET", key)
         if cached_data:
-            logger.debug(f"Data retrieved from cache for key {key}")
-            return orjson.loads(cached_data)
+            loaded = orjson.loads(cached_data)
+            logger.debug(f"Data retrieved from cache for key {key}: {len(loaded)}")
+            return loaded
         return None
     except Exception as e:
         logger.error(f"Error retrieving data from cache: {e}")

cache/revalidator.py (15 changes)

@@ -8,6 +8,7 @@ from cache.cache import (
     invalidate_cache_by_prefix,
 )
 from resolvers.stat import get_with_stat
+from services.redis import redis
 from utils.logger import root_logger as logger

 CACHE_REVALIDATION_INTERVAL = 300  # 5 minutes
@@ -21,9 +22,19 @@ class CacheRevalidationManager:
         self.lock = asyncio.Lock()
         self.running = True
         self.MAX_BATCH_SIZE = 10  # Maximum number of items to process one by one
+        self._redis = redis  # Initialize _redis so the Redis client is reachable

     async def start(self):
         """Start the background worker for cache revalidation."""
+        # Make sure we have a Redis connection
+        if not self._redis._client:
+            logger.warning("Redis connection not established. Waiting for connection...")
+            try:
+                await self._redis.connect()
+                logger.info("Redis connection established for revalidation manager")
+            except Exception as e:
+                logger.error(f"Failed to connect to Redis: {e}")
         self.task = asyncio.create_task(self.revalidate_cache())

     async def revalidate_cache(self):
@@ -39,6 +50,10 @@ class CacheRevalidationManager:
     async def process_revalidation(self):
         """Refresh the cache for all entities that require revalidation."""
+        # Check the Redis connection
+        if not self._redis._client:
+            return  # Bail out if we could not connect
         async with self.lock:
             # Revalidate the author cache
             if self.items_to_revalidate["authors"]:

docs/caching.md

@@ -147,16 +147,32 @@ await invalidate_topics_cache(456)
 ```python
 class CacheRevalidationManager:
-    # ...
-    async def process_revalidation(self):
+    def __init__(self, interval=CACHE_REVALIDATION_INTERVAL):
         # ...
+        self._redis = redis  # Direct reference to the Redis service
+
+    async def start(self):
+        # Check and establish the Redis connection
+        # ...
+
+    async def process_revalidation(self):
+        # Process the items queued for revalidation
+        # ...
+
     def mark_for_revalidation(self, entity_id, entity_type):
+        # Add an entity to the revalidation queue
         # ...
 ```

 The revalidation manager runs as an asynchronous background process that periodically (by default every 5 minutes) checks whether there are entities to revalidate.

-Implementation details:
+**Redis interaction:**
+- CacheRevalidationManager keeps a direct reference to the Redis service in the `_redis` attribute
+- On start it checks for a Redis connection and establishes one if necessary
+- The connection is checked automatically before every revalidation operation
+- The system restores the connection on its own when it is lost
+
+**Implementation details:**
 - Authors and topics are revalidated record by record
 - Shouts and reactions are processed in batches, with a threshold of 10 items
 - When the threshold is reached, the system switches to invalidating whole collections instead of per-record processing
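As a usage illustration, here is a minimal sketch assuming the manager interface described above; `revalidation_manager` is the instance imported in main.py, while the author id and the wiring are hypothetical:

```python
import asyncio

from cache.revalidator import revalidation_manager


async def on_author_updated(author_id: int) -> None:
    # Queue the author for revalidation; the background worker picks the
    # queue up on its next pass (every CACHE_REVALIDATION_INTERVAL seconds).
    revalidation_manager.mark_for_revalidation(author_id, "authors")


async def main() -> None:
    await revalidation_manager.start()  # checks/establishes the Redis connection
    await on_author_updated(123)        # hypothetical author id
    await asyncio.sleep(1)              # keep the loop alive briefly in this demo


asyncio.run(main())
```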

docs/comments-pagination.md (new file, 165 lines)

# Comments pagination

## Overview

A branch-based comments pagination system has been implemented that allows nested discussion threads to be loaded and displayed efficiently. Key benefits:

1. Only the comments that are needed are loaded, not the whole tree
2. Reduced load on both server and client
3. Efficient navigation through large discussions
4. Preloading of the first N replies for a better UX

## API for hierarchical comment loading

### GraphQL query `load_comments_branch`

```graphql
query LoadCommentsBranch(
  $shout: Int!,
  $parentId: Int,
  $limit: Int,
  $offset: Int,
  $sort: ReactionSort,
  $childrenLimit: Int,
  $childrenOffset: Int
) {
  load_comments_branch(
    shout: $shout,
    parent_id: $parentId,
    limit: $limit,
    offset: $offset,
    sort: $sort,
    children_limit: $childrenLimit,
    children_offset: $childrenOffset
  ) {
    id
    body
    created_at
    created_by {
      id
      name
      slug
      pic
    }
    kind
    reply_to
    stat {
      rating
      comments_count
    }
    first_replies {
      id
      body
      created_at
      created_by {
        id
        name
        slug
        pic
      }
      kind
      reply_to
      stat {
        rating
        comments_count
      }
    }
  }
}
```

### Query parameters

| Parameter | Type | Default | Description |
|-----------|------|---------|-------------|
| shout | Int! | - | ID of the article the comments belong to |
| parent_id | Int | null | ID of the parent comment. If null, root comments are loaded |
| limit | Int | 10 | Maximum number of comments to load |
| offset | Int | 0 | Offset for pagination |
| sort | ReactionSort | newest | Sort order: newest, oldest, like |
| children_limit | Int | 3 | Maximum number of child comments per parent |
| children_offset | Int | 0 | Offset for paginating child comments |

### Response fields

Each comment contains the following main fields:

- `id`: comment ID
- `body`: comment text
- `created_at`: creation time
- `created_by`: author information
- `kind`: reaction type (COMMENT)
- `reply_to`: parent comment ID (null for root comments)
- `first_replies`: first N child comments
- `stat`: comment statistics, including:
  - `comments_count`: number of replies to the comment
  - `rating`: comment rating

## Usage examples

### Loading root comments with their first replies

```javascript
const { data } = await client.query({
  query: LOAD_COMMENTS_BRANCH,
  variables: {
    shout: 222,
    limit: 10,
    offset: 0,
    sort: "newest",
    childrenLimit: 3
  }
});
```

### Loading replies to a specific comment

```javascript
const { data } = await client.query({
  query: LOAD_COMMENTS_BRANCH,
  variables: {
    shout: 222,
    parentId: 123, // ID of the comment whose replies we are loading
    limit: 10,
    offset: 0,
    sort: "oldest" // sort replies from oldest to newest
  }
});
```

### Paginating child comments

To load additional replies to a comment:

```javascript
const { data } = await client.query({
  query: LOAD_COMMENTS_BRANCH,
  variables: {
    shout: 222,
    parentId: 123,
    limit: 10,
    offset: 0,
    childrenLimit: 5,
    childrenOffset: 3 // skip the first 3 comments (already loaded)
  }
});
```

## Client implementation recommendations

1. To work with complex discussion threads efficiently:
   - Load only root comments with their first N replies initially
   - When there are more replies (`stat.comments_count > first_replies.length`),
     show a "Show all replies" button
   - On click, load the remaining replies with a query that sets `parentId`
2. For sorting:
   - Use `newest` by default to surface fresh discussions
   - Provide a sort toggle for the whole comment tree
   - Reload the data with the new `sort` parameter when the sorting changes
3. For better performance:
   - Cache query results on the client
   - Use optimistic updates when adding/editing comments
   - Load comments in chunks when needed (lazy loading)
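The same branch query can also be exercised from Python with the `gql` package already present in this repo's requirements. A minimal sketch: the endpoint URL and ids are placeholders, and the aiohttp transport assumes gql's aiohttp extra is installed:

```python
from gql import Client, gql
from gql.transport.aiohttp import AIOHTTPTransport

# Placeholder endpoint; point this at the deployed GraphQL service
transport = AIOHTTPTransport(url="https://example.org/graphql")
client = Client(transport=transport, fetch_schema_from_transport=False)

LOAD_COMMENTS_BRANCH = gql(
    """
    query LoadCommentsBranch($shout: Int!, $parentId: Int, $limit: Int, $sort: ReactionSort) {
      load_comments_branch(shout: $shout, parent_id: $parentId, limit: $limit, sort: $sort) {
        id
        body
        stat { rating comments_count }
        first_replies { id body }
      }
    }
    """
)

# Load the first page of root comments for a placeholder shout id
result = client.execute(
    LOAD_COMMENTS_BRANCH,
    variable_values={"shout": 222, "parentId": None, "limit": 10, "sort": "newest"},
)
print(result["load_comments_branch"])
```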

Features documentation

@@ -34,4 +34,15 @@
 - Supported methods: GET, POST, OPTIONS
 - Credentials support configured
 - Allowed headers: Authorization, Content-Type, X-Requested-With, DNT, Cache-Control
 - Preflight responses cached for 20 days (1728000 seconds)
+
+## Branch-based comments pagination
+- Efficient loading of comments respecting their hierarchical structure
+- Dedicated `load_comments_branch` query for optimized loading of a comment branch
+- Ability to load an article's root comments together with the first replies to them
+- Flexible pagination for both root and child comments
+- The `stat.comments_count` field is used to display the number of replies to a comment
+- A dedicated `first_replies` field stores the first replies to a comment
+- Support for different sorting methods (newest, oldest, popular)
+- Optimized SQL queries to minimize database load

main.py (48 changes)

@@ -17,7 +17,8 @@ from cache.revalidator import revalidation_manager
 from services.exception import ExceptionHandlerMiddleware
 from services.redis import redis
 from services.schema import create_all_tables, resolvers
-from services.search import search_service
+#from services.search import search_service
+from services.search import search_service, initialize_search_index
 from services.viewed import ViewedStorage
 from services.webhook import WebhookEndpoint, create_webhook_endpoint
 from settings import DEV_SERVER_PID_FILE_NAME, MODE
@@ -34,24 +35,67 @@ async def start():
         f.write(str(os.getpid()))
     print(f"[main] process started in {MODE} mode")

+async def check_search_service():
+    """Check if search service is available and log result"""
+    info = await search_service.info()
+    if info.get("status") in ["error", "unavailable"]:
+        print(f"[WARNING] Search service unavailable: {info.get('message', 'unknown reason')}")
+    else:
+        print(f"[INFO] Search service is available: {info}")
+
+
+# indexing DB data
+# async def indexing():
+#     from services.db import fetch_all_shouts
+#     all_shouts = await fetch_all_shouts()
+#     await initialize_search_index(all_shouts)

 async def lifespan(_app):
     try:
+        print("[lifespan] Starting application initialization")
         create_all_tables()
         await asyncio.gather(
             redis.connect(),
             precache_data(),
             ViewedStorage.init(),
             create_webhook_endpoint(),
-            search_service.info(),
+            check_search_service(),
             start(),
             revalidation_manager.start(),
         )
+        print("[lifespan] Basic initialization complete")
+
+        # Add a delay before starting the intensive search indexing
+        print("[lifespan] Waiting for system stabilization before search indexing...")
+        await asyncio.sleep(10)  # 10-second delay to let the system stabilize
+
+        # Start search indexing as a background task with lower priority
+        asyncio.create_task(initialize_search_index_background())
+
         yield
     finally:
+        print("[lifespan] Shutting down application services")
         tasks = [redis.disconnect(), ViewedStorage.stop(), revalidation_manager.stop()]
         await asyncio.gather(*tasks, return_exceptions=True)
+        print("[lifespan] Shutdown complete")
+
+
+# Initialize search index in the background
+async def initialize_search_index_background():
+    """Run search indexing as a background task with low priority"""
+    try:
+        print("[search] Starting background search indexing process")
+        from services.db import fetch_all_shouts
+
+        # Get total count first (optional)
+        all_shouts = await fetch_all_shouts()
+        total_count = len(all_shouts) if all_shouts else 0
+        print(f"[search] Fetched {total_count} shouts for background indexing")
+
+        # Start the indexing process with the fetched shouts
+        print("[search] Beginning background search index initialization...")
+        await initialize_search_index(all_shouts)
+        print("[search] Background search index initialization complete")
+    except Exception as e:
+        print(f"[search] Error in background search indexing: {str(e)}")

 # Create the GraphQL instance
 graphql_app = GraphQL(schema, debug=True)

orm/draft.py

@@ -6,6 +6,7 @@ from sqlalchemy.orm import relationship
 from orm.author import Author
 from orm.topic import Topic
 from services.db import Base
+from orm.shout import Shout


 class DraftTopic(Base):
@@ -26,11 +27,14 @@ class DraftAuthor(Base):
     caption = Column(String, nullable=True, default="")


 class Draft(Base):
     __tablename__ = "draft"

     # required
     created_at: int = Column(Integer, nullable=False, default=lambda: int(time.time()))
-    created_by: int = Column(ForeignKey("author.id"), nullable=False)
+    # Columns for the author/community relations
+    created_by: int = Column("created_by", ForeignKey("author.id"), nullable=False)
+    community: int = Column("community", ForeignKey("community.id"), nullable=False, default=1)

     # optional
     layout: str = Column(String, nullable=True, default="article")
@@ -38,7 +42,6 @@ class Draft(Base):
     title: str = Column(String, nullable=True)
     subtitle: str | None = Column(String, nullable=True)
     lead: str | None = Column(String, nullable=True)
-    description: str | None = Column(String, nullable=True)
     body: str = Column(String, nullable=False, comment="Body")
     media: dict | None = Column(JSON, nullable=True)
     cover: str | None = Column(String, nullable=True, comment="Cover image url")
@@ -49,7 +52,55 @@ class Draft(Base):
     # auto
     updated_at: int | None = Column(Integer, nullable=True, index=True)
     deleted_at: int | None = Column(Integer, nullable=True, index=True)
-    updated_by: int | None = Column(ForeignKey("author.id"), nullable=True)
-    deleted_by: int | None = Column(ForeignKey("author.id"), nullable=True)
-    authors = relationship(Author, secondary="draft_author")
-    topics = relationship(Topic, secondary="draft_topic")
+    updated_by: int | None = Column("updated_by", ForeignKey("author.id"), nullable=True)
+    deleted_by: int | None = Column("deleted_by", ForeignKey("author.id"), nullable=True)
+
+    # --- Relationships ---
+    # Only many-to-many relations through the helper tables
+    authors = relationship(Author, secondary="draft_author", lazy="select")
+    topics = relationship(Topic, secondary="draft_topic", lazy="select")
+
+    # Relation to Community (if needed as an object rather than an ID)
+    # community = relationship("Community", foreign_keys=[community_id], lazy="joined")
+    # For now community stays as an ID
+
+    # Relation to the publication (one-to-one or one-to-none)
+    # Loaded via joinedload in the resolver
+    publication = relationship(
+        "Shout",
+        primaryjoin="Draft.id == Shout.draft",
+        foreign_keys="Shout.draft",
+        uselist=False,
+        lazy="noload",  # Not loaded by default, only via options
+        viewonly=True,  # Read-only relation
+    )
+
+    def dict(self):
+        """
+        Serializes the Draft object into a dictionary.
+
+        Guarantees that the topics and authors fields are always lists.
+        """
+        return {
+            "id": self.id,
+            "created_at": self.created_at,
+            "created_by": self.created_by,
+            "community": self.community,
+            "layout": self.layout,
+            "slug": self.slug,
+            "title": self.title,
+            "subtitle": self.subtitle,
+            "lead": self.lead,
+            "body": self.body,
+            "media": self.media or [],
+            "cover": self.cover,
+            "cover_caption": self.cover_caption,
+            "lang": self.lang,
+            "seo": self.seo,
+            "updated_at": self.updated_at,
+            "deleted_at": self.deleted_at,
+            "updated_by": self.updated_by,
+            "deleted_by": self.deleted_by,
+            # Guarantee that topics and authors are always lists
+            "topics": [topic.dict() for topic in (self.topics or [])],
+            "authors": [author.dict() for author in (self.authors or [])],
+        }

orm/shout.py

@@ -71,6 +71,34 @@ class ShoutAuthor(Base):

 class Shout(Base):
     """
     A publication in the system.
+
+    Attributes:
+        body (str)
+        slug (str)
+        cover (str) : "Cover image url"
+        cover_caption (str) : "Cover image alt caption"
+        lead (str)
+        title (str)
+        subtitle (str)
+        layout (str)
+        media (dict)
+        authors (list[Author])
+        topics (list[Topic])
+        reactions (list[Reaction])
+        lang (str)
+        version_of (int)
+        oid (str)
+        seo (str) : JSON
+        draft (int)
+        created_at (int)
+        updated_at (int)
+        published_at (int)
+        featured_at (int)
+        deleted_at (int)
+        created_by (int)
+        updated_by (int)
+        deleted_by (int)
+        community (int)
     """

     __tablename__ = "shout"
@@ -91,7 +119,6 @@ class Shout(Base):
     cover: str | None = Column(String, nullable=True, comment="Cover image url")
     cover_caption: str | None = Column(String, nullable=True, comment="Cover image alt caption")
     lead: str | None = Column(String, nullable=True)
-    description: str | None = Column(String, nullable=True)
     title: str = Column(String, nullable=False)
     subtitle: str | None = Column(String, nullable=True)
     layout: str = Column(String, nullable=False, default="article")

requirements.txt

@@ -13,5 +13,10 @@ starlette
 gql
 ariadne
 granian
+
+# NLP and search
+httpx
+
 orjson
 pydantic
+trafilatura

resolvers/__init__.py

@@ -8,6 +8,7 @@ from resolvers.author import (  # search_authors,
     get_author_id,
     get_authors_all,
     load_authors_by,
+    load_authors_search,
     update_author,
 )
 from resolvers.community import get_communities_all, get_community
@@ -16,9 +17,11 @@ from resolvers.draft import (
     delete_draft,
     load_drafts,
     publish_draft,
+    unpublish_draft,
     update_draft,
 )
+from resolvers.editor import (
+    unpublish_shout,
+)
 from resolvers.feed import (
     load_shouts_coauthored,
     load_shouts_discussed,
@@ -37,6 +40,7 @@ from resolvers.reaction import (
     create_reaction,
     delete_reaction,
     load_comment_ratings,
+    load_comments_branch,
     load_reactions_by,
     load_shout_comments,
     load_shout_ratings,
@@ -70,6 +74,7 @@ __all__ = [
     "get_author_follows_authors",
     "get_authors_all",
     "load_authors_by",
+    "load_authors_search",
     "update_author",
     ## "search_authors",
     # community
@@ -107,6 +112,7 @@ __all__ = [
     "load_shout_comments",
     "load_shout_ratings",
     "load_comment_ratings",
+    "load_comments_branch",
     # notifier
     "load_notifications",
     "notifications_seen_thread",

resolvers/author.py

@@ -20,6 +20,7 @@ from services.auth import login_required
 from services.db import local_session
 from services.redis import redis
 from services.schema import mutation, query
+from services.search import search_service
 from utils.logger import root_logger as logger

 DEFAULT_COMMUNITIES = [1]
@@ -232,22 +233,6 @@ async def get_authors_all(_, _info):
     return await get_all_authors()


-@query.field("get_authors_paginated")
-async def get_authors_paginated(_, _info, limit=50, offset=0, by=None):
-    """
-    Get a list of authors with pagination and statistics.
-
-    Args:
-        limit: Maximum number of authors to return
-        offset: Offset for pagination
-        by: Sorting parameter (new/active)
-
-    Returns:
-        list: List of authors with their statistics
-    """
-    return await get_authors_with_stats(limit, offset, by)
-
-
 @query.field("get_author")
 async def get_author(_, _info, slug="", author_id=0):
     author_dict = None
@@ -317,6 +302,46 @@ async def load_authors_by(_, _info, by, limit, offset):
     return await get_authors_with_stats(limit, offset, by)


+@query.field("load_authors_search")
+async def load_authors_search(_, info, text: str, limit: int = 10, offset: int = 0):
+    """
+    Resolver for searching authors by text. Works with the txtai search endpoint.
+
+    Args:
+        text: Search text
+        limit: Maximum number of authors to return
+        offset: Offset for pagination
+
+    Returns:
+        list: List of authors matching the search criteria
+    """
+    # Get author IDs from search engine (already sorted by relevance)
+    search_results = await search_service.search_authors(text, limit, offset)
+    if not search_results:
+        return []
+
+    author_ids = [result.get("id") for result in search_results if result.get("id")]
+    if not author_ids:
+        return []
+
+    # Fetch full author objects from DB
+    with local_session() as session:
+        # Simple query to get authors by IDs - no need for stats here
+        authors_query = select(Author).filter(Author.id.in_(author_ids))
+        db_authors = session.execute(authors_query).scalars().all()
+        if not db_authors:
+            return []
+
+        # Create a dictionary for quick lookup
+        authors_dict = {str(author.id): author for author in db_authors}
+
+        # Keep the order from search results (maintains the relevance sorting)
+        ordered_authors = [authors_dict[author_id] for author_id in author_ids if author_id in authors_dict]
+
+        return ordered_authors
+
+
 def get_author_id_from(slug="", user=None, author_id=None):
     try:
         author_id = None

View File

@@ -1,7 +1,6 @@
import time import time
from operator import or_ import trafilatura
from sqlalchemy.orm import joinedload
from sqlalchemy.sql import and_
from cache.cache import ( from cache.cache import (
cache_author, cache_author,
@@ -11,7 +10,7 @@ from cache.cache import (
invalidate_shouts_cache, invalidate_shouts_cache,
) )
from orm.author import Author from orm.author import Author
from orm.draft import Draft from orm.draft import Draft, DraftAuthor, DraftTopic
from orm.shout import Shout, ShoutAuthor, ShoutTopic from orm.shout import Shout, ShoutAuthor, ShoutTopic
from orm.topic import Topic from orm.topic import Topic
from services.auth import login_required from services.auth import login_required
@@ -19,35 +18,70 @@ from services.db import local_session
from services.notify import notify_shout from services.notify import notify_shout
from services.schema import mutation, query from services.schema import mutation, query
from services.search import search_service from services.search import search_service
from utils.html_wrapper import wrap_html_fragment
from utils.logger import root_logger as logger from utils.logger import root_logger as logger
def create_shout_from_draft(session, draft, author_id): def create_shout_from_draft(session, draft, author_id):
"""
Создаёт новый объект публикации (Shout) на основе черновика.
Args:
session: SQLAlchemy сессия (не используется, для совместимости)
draft (Draft): Объект черновика
author_id (int): ID автора публикации
Returns:
Shout: Новый объект публикации (не сохранённый в базе)
Пример:
>>> from orm.draft import Draft
>>> draft = Draft(id=1, title='Заголовок', body='Текст', slug='slug', created_by=1)
>>> shout = create_shout_from_draft(None, draft, 1)
>>> shout.title
'Заголовок'
>>> shout.body
'Текст'
>>> shout.created_by
1
"""
# Создаем новую публикацию # Создаем новую публикацию
shout = Shout( shout = Shout(
body=draft.body, body=draft.body or "",
slug=draft.slug, slug=draft.slug,
cover=draft.cover, cover=draft.cover,
cover_caption=draft.cover_caption, cover_caption=draft.cover_caption,
lead=draft.lead, lead=draft.lead,
description=draft.description, title=draft.title or "",
title=draft.title,
subtitle=draft.subtitle, subtitle=draft.subtitle,
layout=draft.layout, layout=draft.layout or "article",
media=draft.media, media=draft.media or [],
lang=draft.lang, lang=draft.lang or "ru",
seo=draft.seo, seo=draft.seo,
created_by=author_id, created_by=author_id,
community=draft.community, community=draft.community,
draft=draft.id, draft=draft.id,
deleted_at=None, deleted_at=None,
) )
# Инициализируем пустые массивы для связей
shout.topics = []
shout.authors = []
return shout return shout
@query.field("load_drafts") @query.field("load_drafts")
@login_required @login_required
async def load_drafts(_, info): async def load_drafts(_, info):
"""
Загружает все черновики, доступные текущему пользователю.
Предварительно загружает связанные объекты (topics, authors, publication),
чтобы избежать ошибок с отсоединенными объектами при сериализации.
Returns:
dict: Список черновиков или сообщение об ошибке
"""
user_id = info.context.get("user_id") user_id = info.context.get("user_id")
author_dict = info.context.get("author", {}) author_dict = info.context.get("author", {})
author_id = author_dict.get("id") author_id = author_dict.get("id")
@@ -55,13 +89,44 @@ async def load_drafts(_, info):
if not user_id or not author_id: if not user_id or not author_id:
return {"error": "User ID and author ID are required"} return {"error": "User ID and author ID are required"}
with local_session() as session: try:
drafts = ( with local_session() as session:
session.query(Draft) # Предзагружаем authors, topics и связанную publication
.filter(or_(Draft.authors.any(Author.id == author_id), Draft.created_by == author_id)) drafts_query = (
.all() session.query(Draft)
) .options(
return {"drafts": drafts} joinedload(Draft.topics),
joinedload(Draft.authors),
joinedload(Draft.publication) # Загружаем связанную публикацию
)
.filter(Draft.authors.any(Author.id == author_id))
)
drafts = drafts_query.all()
# Преобразуем объекты в словари, пока они в контексте сессии
drafts_data = []
for draft in drafts:
draft_dict = draft.dict()
# Всегда возвращаем массив для topics, даже если он пустой
draft_dict["topics"] = [topic.dict() for topic in (draft.topics or [])]
draft_dict["authors"] = [author.dict() for author in (draft.authors or [])]
# Добавляем информацию о публикации, если она есть
if draft.publication:
draft_dict["publication"] = {
"id": draft.publication.id,
"slug": draft.publication.slug,
"published_at": draft.publication.published_at
}
else:
draft_dict["publication"] = None
drafts_data.append(draft_dict)
return {"drafts": drafts_data}
except Exception as e:
logger.error(f"Failed to load drafts: {e}", exc_info=True)
return {"error": f"Failed to load drafts: {str(e)}"}
@mutation.field("create_draft") @mutation.field("create_draft")
@@ -105,23 +170,40 @@ async def create_draft(_, info, draft_input):
if "title" not in draft_input or not draft_input["title"]: if "title" not in draft_input or not draft_input["title"]:
draft_input["title"] = "" # Пустая строка вместо NULL draft_input["title"] = "" # Пустая строка вместо NULL
# Проверяем slug - он должен быть или не пустым, или не передаваться вообще
if "slug" in draft_input and (draft_input["slug"] is None or draft_input["slug"] == ""):
# При создании черновика удаляем пустой slug из входных данных
del draft_input["slug"]
try: try:
with local_session() as session: with local_session() as session:
# Remove id from input if present since it's auto-generated # Remove id from input if present since it's auto-generated
if "id" in draft_input: if "id" in draft_input:
del draft_input["id"] del draft_input["id"]
# Добавляем текущее время создания # Добавляем текущее время создания и ID автора
draft_input["created_at"] = int(time.time()) draft_input["created_at"] = int(time.time())
draft_input["created_by"] = author_id
draft = Draft(created_by=author_id, **draft_input) draft = Draft(**draft_input)
session.add(draft) session.add(draft)
session.flush()
# Добавляем создателя как автора
da = DraftAuthor(shout=draft.id, author=author_id)
session.add(da)
session.commit() session.commit()
return {"draft": draft} return {"draft": draft}
except Exception as e: except Exception as e:
logger.error(f"Failed to create draft: {e}", exc_info=True) logger.error(f"Failed to create draft: {e}", exc_info=True)
return {"error": f"Failed to create draft: {str(e)}"} return {"error": f"Failed to create draft: {str(e)}"}
def generate_teaser(body, limit=300):
    body_html = wrap_html_fragment(body)
    body_text = trafilatura.extract(body_html, include_comments=False, include_tables=False)
    body_teaser = ". ".join(body_text[:limit].split(". ")[:-1])
    return body_teaser
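Note: trafilatura.extract may return None for input it cannot parse, which would make the slicing here raise a TypeError; callers are expected to guard, as the update_draft hunk below does with `if body_text else ""`. What the sentence-trimming line itself does, in isolation:

# Standalone illustration of the teaser trimming used above:
# cut to `limit` characters, then drop the trailing partial sentence.
text = "First sentence. Second sentence. Third sentence."
limit = 35
teaser = ". ".join(text[:limit].split(". ")[:-1])
print(teaser)  # -> "First sentence. Second sentence"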
@mutation.field("update_draft") @mutation.field("update_draft")
@login_required @login_required
@@ -130,7 +212,21 @@ async def update_draft(_, info, draft_id: int, draft_input):
Args: Args:
draft_id: ID черновика для обновления draft_id: ID черновика для обновления
draft_input: Данные для обновления черновика draft_input: Данные для обновления черновика согласно схеме DraftInput:
- layout: String
- author_ids: [Int!]
- topic_ids: [Int!]
- main_topic_id: Int
- media: [MediaItemInput]
- lead: String
- subtitle: String
- lang: String
- seo: String
- body: String
- title: String
- slug: String
- cover: String
- cover_caption: String
Returns: Returns:
dict: Обновленный черновик или сообщение об ошибке dict: Обновленный черновик или сообщение об ошибке
@@ -142,15 +238,89 @@ async def update_draft(_, info, draft_id: int, draft_input):
    if not user_id or not author_id:
        return {"error": "Author ID is required"}

-    with local_session() as session:
-        draft = session.query(Draft).filter(Draft.id == draft_id).first()
-        if not draft:
-            return {"error": "Draft not found"}
-
-        Draft.update(draft, draft_input)
-        draft.updated_at = int(time.time())
-        session.commit()
-        return {"draft": draft}
+    try:
+        with local_session() as session:
+            draft = session.query(Draft).filter(Draft.id == draft_id).first()
+            if not draft:
+                return {"error": "Draft not found"}
+
+            # Filter the input, keeping only the allowed fields
+            allowed_fields = {
+                "layout", "author_ids", "topic_ids", "main_topic_id",
+                "media", "lead", "subtitle", "lang", "seo", "body",
+                "title", "slug", "cover", "cover_caption"
+            }
+            filtered_input = {k: v for k, v in draft_input.items() if k in allowed_fields}
+
+            # Validate the slug
+            if "slug" in filtered_input and not filtered_input["slug"]:
+                del filtered_input["slug"]
+
+            # Update author links when they are provided
+            if "author_ids" in filtered_input:
+                author_ids = filtered_input.pop("author_ids")
+                if author_ids:
+                    # Clear the current links
+                    session.query(DraftAuthor).filter(DraftAuthor.shout == draft_id).delete()
+                    # Add the new links
+                    for aid in author_ids:
+                        da = DraftAuthor(shout=draft_id, author=aid)
+                        session.add(da)
+
+            # Update topic links when they are provided
+            if "topic_ids" in filtered_input:
+                topic_ids = filtered_input.pop("topic_ids")
+                main_topic_id = filtered_input.pop("main_topic_id", None)
+                if topic_ids:
+                    # Clear the current links
+                    session.query(DraftTopic).filter(DraftTopic.shout == draft_id).delete()
+                    # Add the new links
+                    for tid in topic_ids:
+                        dt = DraftTopic(
+                            shout=draft_id,
+                            topic=tid,
+                            main=(tid == main_topic_id) if main_topic_id else False
+                        )
+                        session.add(dt)
+
+            # Generate SEO text if none was provided
+            if "seo" not in filtered_input and not draft.seo:
+                body_src = filtered_input.get("body", draft.body)
+                lead_src = filtered_input.get("lead", draft.lead)
+                body_html = wrap_html_fragment(body_src)
+                lead_html = wrap_html_fragment(lead_src)
+                try:
+                    body_text = trafilatura.extract(body_html, include_comments=False, include_tables=False) if body_src else None
+                    lead_text = trafilatura.extract(lead_html, include_comments=False, include_tables=False) if lead_src else None
+                    body_teaser = generate_teaser(body_text, 300) if body_text else ""
+                    filtered_input["seo"] = lead_text if lead_text else body_teaser
+                except Exception as e:
+                    logger.warning(f"Failed to generate SEO for draft {draft_id}: {e}")
+
+            # Update the draft's scalar fields
+            for key, value in filtered_input.items():
+                setattr(draft, key, value)
+
+            # Update the metadata
+            draft.updated_at = int(time.time())
+            draft.updated_by = author_id
+
+            session.commit()
+
+            # Convert the object to a dictionary for the response
+            draft_dict = draft.dict()
+            draft_dict["topics"] = [topic.dict() for topic in draft.topics]
+            draft_dict["authors"] = [author.dict() for author in draft.authors]
+            # Attach the author object as updated_by
+            draft_dict["updated_by"] = author_dict
+
+            return {"draft": draft_dict}
+    except Exception as e:
+        logger.error(f"Failed to update draft: {e}", exc_info=True)
+        return {"error": f"Failed to update draft: {str(e)}"}
@mutation.field("delete_draft") @mutation.field("delete_draft")
@@ -170,183 +340,136 @@ async def delete_draft(_, info, draft_id: int):
return {"draft": draft} return {"draft": draft}
def validate_html_content(html_content: str) -> tuple[bool, str]:
    """
    Validates HTML content via trafilatura.

    Args:
        html_content: The HTML string to check

    Returns:
        tuple[bool, str]: (validity, error message)

    Example:
        >>> is_valid, error = validate_html_content("<p>Valid HTML</p>")
        >>> is_valid
        True
        >>> error
        ''
        >>> is_valid, error = validate_html_content("Invalid < HTML")
        >>> is_valid
        False
        >>> 'Invalid HTML' in error
        True
    """
    if not html_content or not html_content.strip():
        return False, "Content is empty"
    try:
        html_content = wrap_html_fragment(html_content)
        extracted = trafilatura.extract(html_content)
        if not extracted:
            return False, "Invalid HTML structure or empty content"
        return True, ""
    except Exception as e:
        logger.error(f"HTML validation error: {e}", exc_info=True)
        return False, f"Invalid HTML content: {str(e)}"
@mutation.field("publish_draft") @mutation.field("publish_draft")
@login_required @login_required
async def publish_draft(_, info, draft_id: int): async def publish_draft(_, info, draft_id: int):
user_id = info.context.get("user_id") """
author_dict = info.context.get("author", {}) Публикует черновик, создавая новый Shout или обновляя существующий.
author_id = author_dict.get("id")
if not user_id or not author_id:
return {"error": "User ID and author ID are required"}
with local_session() as session:
draft = session.query(Draft).filter(Draft.id == draft_id).first()
if not draft:
return {"error": "Draft not found"}
shout = create_shout_from_draft(session, draft, author_id)
session.add(shout)
session.commit()
return {"shout": shout, "draft": draft}
@mutation.field("unpublish_draft")
@login_required
async def unpublish_draft(_, info, draft_id: int):
user_id = info.context.get("user_id")
author_dict = info.context.get("author", {})
author_id = author_dict.get("id")
if not user_id or not author_id:
return {"error": "User ID and author ID are required"}
with local_session() as session:
draft = session.query(Draft).filter(Draft.id == draft_id).first()
if not draft:
return {"error": "Draft not found"}
shout = session.query(Shout).filter(Shout.draft == draft.id).first()
if shout:
shout.published_at = None
session.commit()
return {"shout": shout, "draft": draft}
return {"error": "Failed to unpublish draft"}
@mutation.field("publish_shout")
@login_required
async def publish_shout(_, info, shout_id: int):
"""Publish draft as a shout or update existing shout.
Args: Args:
shout_id: ID существующей публикации или 0 для новой draft_id (int): ID черновика для публикации
draft: Объект черновика (опционально)
Returns:
dict: Результат публикации с shout или сообщением об ошибке
""" """
user_id = info.context.get("user_id") user_id = info.context.get("user_id")
author_dict = info.context.get("author", {}) author_dict = info.context.get("author", {})
author_id = author_dict.get("id") author_id = author_dict.get("id")
now = int(time.time())
if not user_id or not author_id: if not user_id or not author_id:
return {"error": "User ID and author ID are required"} return {"error": "Author ID is required"}
try: try:
with local_session() as session: with local_session() as session:
shout = session.query(Shout).filter(Shout.id == shout_id).first() # Загружаем черновик со всеми связями
if not shout: draft = (
return {"error": "Shout not found"} session.query(Draft)
was_published = shout.published_at is not None .options(
draft = session.query(Draft).where(Draft.id == shout.draft).first() joinedload(Draft.topics),
joinedload(Draft.authors),
joinedload(Draft.publication)
)
.filter(Draft.id == draft_id)
.first()
)
if not draft: if not draft:
return {"error": "Draft not found"} return {"error": "Draft not found"}
# Находим черновик если не передан
if not shout: # Проверка валидности HTML в body
shout = create_shout_from_draft(session, draft, author_id) is_valid, error = validate_html_content(draft.body)
else: if not is_valid:
return {"error": f"Cannot publish draft: {error}"}
# Проверяем, есть ли уже публикация для этого черновика
if draft.publication:
shout = draft.publication
# Обновляем существующую публикацию # Обновляем существующую публикацию
shout.draft = draft.id for field in ["body", "title", "subtitle", "lead", "cover", "cover_caption", "media", "lang", "seo"]:
shout.created_by = author_id if hasattr(draft, field):
shout.title = draft.title setattr(shout, field, getattr(draft, field))
shout.subtitle = draft.subtitle shout.updated_at = int(time.time())
shout.body = draft.body shout.updated_by = author_id
shout.cover = draft.cover else:
shout.cover_caption = draft.cover_caption # Создаем новую публикацию
shout.lead = draft.lead shout = create_shout_from_draft(session, draft, author_id)
shout.description = draft.description now = int(time.time())
shout.layout = draft.layout shout.created_at = now
shout.media = draft.media shout.published_at = now
shout.lang = draft.lang session.add(shout)
shout.seo = draft.seo session.flush() # Получаем ID нового шаута
draft.updated_at = now # Очищаем существующие связи
shout.updated_at = now session.query(ShoutAuthor).filter(ShoutAuthor.shout == shout.id).delete()
session.query(ShoutTopic).filter(ShoutTopic.shout == shout.id).delete()
# Устанавливаем published_at только если была ранее снята с публикации # Добавляем авторов
if not was_published: for author in (draft.authors or []):
shout.published_at = now sa = ShoutAuthor(shout=shout.id, author=author.id)
# Обрабатываем связи с авторами
if (
not session.query(ShoutAuthor)
.filter(and_(ShoutAuthor.shout == shout.id, ShoutAuthor.author == author_id))
.first()
):
sa = ShoutAuthor(shout=shout.id, author=author_id)
session.add(sa) session.add(sa)
# Обрабатываем темы # Добавляем темы
if draft.topics: for topic in (draft.topics or []):
for topic in draft.topics: st = ShoutTopic(
st = ShoutTopic( topic=topic.id,
topic=topic.id, shout=shout.id, main=topic.main if hasattr(topic, "main") else False shout=shout.id,
) main=topic.main if hasattr(topic, "main") else False
session.add(st) )
session.add(st)
session.add(shout)
session.add(draft)
session.flush()
# Инвалидируем кэш только если это новая публикация или была снята с публикации
if not was_published:
cache_keys = ["feed", f"author_{author_id}", "random_top", "unrated"]
# Добавляем ключи для тем
for topic in shout.topics:
cache_keys.append(f"topic_{topic.id}")
cache_keys.append(f"topic_shouts_{topic.id}")
await cache_by_id(Topic, topic.id, cache_topic)
# Инвалидируем кэш
await invalidate_shouts_cache(cache_keys)
await invalidate_shout_related_cache(shout, author_id)
# Обновляем кэш авторов
for author in shout.authors:
await cache_by_id(Author, author.id, cache_author)
# Отправляем уведомление о публикации
await notify_shout(shout.dict(), "published")
# Обновляем поисковый индекс
search_service.index(shout)
else:
# Для уже опубликованных материалов просто отправляем уведомление об обновлении
await notify_shout(shout.dict(), "update")
session.commit() session.commit()
# Инвалидируем кеш
invalidate_shouts_cache()
invalidate_shout_related_cache(shout.id)
# Уведомляем о публикации
await notify_shout(shout.id)
# Обновляем поисковый индекс
search_service.index_shout(shout)
logger.info(f"Successfully published shout #{shout.id} from draft #{draft_id}")
logger.debug(f"Shout data: {shout.dict()}")
return {"shout": shout} return {"shout": shout}
except Exception as e: except Exception as e:
logger.error(f"Failed to publish shout: {e}", exc_info=True) logger.error(f"Failed to publish draft {draft_id}: {e}", exc_info=True)
if "session" in locals(): return {"error": f"Failed to publish draft: {str(e)}"}
session.rollback()
return {"error": f"Failed to publish shout: {str(e)}"}
@mutation.field("unpublish_shout")
@login_required
async def unpublish_shout(_, info, shout_id: int):
"""Unpublish a shout.
Args:
shout_id: The ID of the shout to unpublish
Returns:
dict: The unpublished shout or an error message
"""
author_dict = info.context.get("author", {})
author_id = author_dict.get("id")
if not author_id:
return {"error": "Author ID is required"}
shout = None
with local_session() as session:
try:
shout = session.query(Shout).filter(Shout.id == shout_id).first()
shout.published_at = None
session.commit()
invalidate_shout_related_cache(shout)
invalidate_shouts_cache()
except Exception:
session.rollback()
return {"error": "Failed to unpublish shout"}
return {"shout": shout}

View File

@@ -1,8 +1,9 @@
import time

import orjson
+import trafilatura
from sqlalchemy import and_, desc, select
-from sqlalchemy.orm import joinedload
+from sqlalchemy.orm import joinedload, selectinload
from sqlalchemy.sql.functions import coalesce

from cache.cache import (
@@ -12,6 +13,7 @@ from cache.cache import (
    invalidate_shouts_cache,
)
from orm.author import Author
+from orm.draft import Draft
from orm.shout import Shout, ShoutAuthor, ShoutTopic
from orm.topic import Topic
from resolvers.follower import follow, unfollow
@@ -19,8 +21,9 @@ from resolvers.stat import get_with_stat
from services.auth import login_required
from services.db import local_session
from services.notify import notify_shout
-from services.schema import query
+from services.schema import mutation, query
from services.search import search_service
+from utils.html_wrapper import wrap_html_fragment
from utils.logger import root_logger as logger
@@ -176,9 +179,18 @@ async def create_shout(_, info, inp):
    logger.info(f"Creating shout with input: {inp}")
    # Create the publication without topics
+    body = inp.get("body", "")
+    lead = inp.get("lead", "")
+    body_html = wrap_html_fragment(body)
+    lead_html = wrap_html_fragment(lead)
+    body_text = trafilatura.extract(body_html)
+    lead_text = trafilatura.extract(lead_html)
+    seo = inp.get("seo", (lead_text or "").strip() or ". ".join((body_text or "").strip()[:300].split(". ")[:-1]))
    new_shout = Shout(
        slug=slug,
-        body=inp.get("body", ""),
+        body=body,
+        seo=seo,
+        lead=lead,
        layout=inp.get("layout", "article"),
        title=inp.get("title", ""),
        created_by=author_id,
@@ -380,7 +392,7 @@ def patch_topics(session, shout, topics_input):
# @login_required
async def update_shout(_, info, shout_id: int, shout_input=None, publish=False):
    logger.info(f"Starting update_shout with id={shout_id}, publish={publish}")
-    logger.debug(f"Full shout_input: {shout_input}")
+    logger.debug(f"Full shout_input: {shout_input}")  # DraftInput

    user_id = info.context.get("user_id")
    roles = info.context.get("roles", [])
@@ -637,39 +649,178 @@ def get_main_topic(topics):
    """Get the main topic from a list of ShoutTopic objects."""
    logger.info(f"Starting get_main_topic with {len(topics) if topics else 0} topics")
-    logger.debug(
-        f"Topics data: {[(t.topic.slug if t.topic else 'no-topic', t.main) for t in topics] if topics else []}"
-    )
+    logger.debug(
+        f"Topics data: {[(t.slug, getattr(t, 'main', False)) for t in topics] if topics else []}"
+    )

    if not topics:
        logger.warning("No topics provided to get_main_topic")
        return {"id": 0, "title": "no topic", "slug": "notopic", "is_main": True}

-    # Find first main topic in original order
-    main_topic_rel = next((st for st in topics if st.main), None)
-    logger.debug(
-        f"Found main topic relation: {main_topic_rel.topic.slug if main_topic_rel and main_topic_rel.topic else None}"
-    )
+    # Check whether topics is a list of ShoutTopic or Topic objects
+    if hasattr(topics[0], 'topic') and topics[0].topic:
+        # For ShoutTopic objects (the old format)
+        # Find first main topic in original order
+        main_topic_rel = next((st for st in topics if getattr(st, 'main', False)), None)
+        logger.debug(
+            f"Found main topic relation: {main_topic_rel.topic.slug if main_topic_rel and main_topic_rel.topic else None}"
+        )

-    if main_topic_rel and main_topic_rel.topic:
-        result = {
-            "slug": main_topic_rel.topic.slug,
-            "title": main_topic_rel.topic.title,
-            "id": main_topic_rel.topic.id,
-            "is_main": True,
-        }
-        logger.info(f"Returning main topic: {result}")
-        return result
+        if main_topic_rel and main_topic_rel.topic:
+            result = {
+                "slug": main_topic_rel.topic.slug,
+                "title": main_topic_rel.topic.title,
+                "id": main_topic_rel.topic.id,
+                "is_main": True,
+            }
+            logger.info(f"Returning main topic: {result}")
+            return result

-    # If no main found but topics exist, return first
-    if topics and topics[0].topic:
-        logger.info(f"No main topic found, using first topic: {topics[0].topic.slug}")
-        result = {
-            "slug": topics[0].topic.slug,
-            "title": topics[0].topic.title,
-            "id": topics[0].topic.id,
-            "is_main": True,
-        }
-        return result
+        # If no main found but topics exist, return first
+        if topics and topics[0].topic:
+            logger.info(f"No main topic found, using first topic: {topics[0].topic.slug}")
+            result = {
+                "slug": topics[0].topic.slug,
+                "title": topics[0].topic.title,
+                "id": topics[0].topic.id,
+                "is_main": True,
+            }
+            return result
+    else:
+        # For Topic objects (the new format produced by selectinload)
+        # After the switch to selectinload we simply get a list of Topic objects
+        if topics:
+            logger.info(f"Using first topic as main: {topics[0].slug}")
+            result = {
+                "slug": topics[0].slug,
+                "title": topics[0].title,
+                "id": topics[0].id,
+                "is_main": True,
+            }
+            return result

    logger.warning("No valid topics found, returning default")
    return {"slug": "notopic", "title": "no topic", "id": 0, "is_main": True}
@mutation.field("unpublish_shout")
@login_required
async def unpublish_shout(_, info, shout_id: int):
"""Снимает публикацию (shout) с публикации.
Предзагружает связанный черновик (draft) и его авторов/темы, чтобы избежать
ошибок при последующем доступе к ним в GraphQL.
Args:
shout_id: ID публикации для снятия с публикации
Returns:
dict: Снятая с публикации публикация или сообщение об ошибке
"""
author_dict = info.context.get("author", {})
author_id = author_dict.get("id")
if not author_id:
# В идеале нужна проверка прав, имеет ли автор право снимать публикацию
return {"error": "Author ID is required"}
shout = None
with local_session() as session:
try:
# Загружаем Shout со всеми связями для правильного формирования ответа
shout = (
session.query(Shout)
.options(
joinedload(Shout.authors),
selectinload(Shout.topics)
)
.filter(Shout.id == shout_id)
.first()
)
if not shout:
logger.warning(f"Shout not found for unpublish: ID {shout_id}")
return {"error": "Shout not found"}
# Если у публикации есть связанный черновик, загружаем его с relationships
if shout.draft:
# Отдельно загружаем черновик с его связями
draft = (
session.query(Draft)
.options(
selectinload(Draft.authors),
selectinload(Draft.topics)
)
.filter(Draft.id == shout.draft)
.first()
)
# Связываем черновик с публикацией вручную для доступа через API
if draft:
shout.draft_obj = draft
# TODO: Добавить проверку прав доступа, если необходимо
# if author_id not in [a.id for a in shout.authors]: # Требует selectinload(Shout.authors) выше
# logger.warning(f"Author {author_id} denied unpublishing shout {shout_id}")
# return {"error": "Access denied"}
# Запоминаем старый slug и id для формирования поля publication
shout_slug = shout.slug
shout_id_for_publication = shout.id
# Снимаем с публикации (устанавливаем published_at в None)
shout.published_at = None
session.commit()
# Формируем полноценный словарь для ответа
shout_dict = shout.dict()
# Добавляем связанные данные
shout_dict["topics"] = (
[
{"id": topic.id, "slug": topic.slug, "title": topic.title}
for topic in shout.topics
]
if shout.topics
else []
)
# Добавляем main_topic
shout_dict["main_topic"] = get_main_topic(shout.topics)
# Добавляем авторов
shout_dict["authors"] = (
[
{"id": author.id, "name": author.name, "slug": author.slug}
for author in shout.authors
]
if shout.authors
else []
)
# Важно! Обновляем поле publication, отражая состояние "снят с публикации"
shout_dict["publication"] = {
"id": shout_id_for_publication,
"slug": shout_slug,
"published_at": None # Ключевое изменение - устанавливаем published_at в None
}
# Инвалидация кэша
try:
cache_keys = [
"feed", # лента
f"author_{author_id}", # публикации автора
"random_top", # случайные топовые
"unrated", # неоцененные
]
await invalidate_shout_related_cache(shout, author_id)
await invalidate_shouts_cache(cache_keys)
logger.info(f"Cache invalidated after unpublishing shout {shout_id}")
except Exception as cache_err:
logger.error(f"Failed to invalidate cache for unpublish shout {shout_id}: {cache_err}")
except Exception as e:
session.rollback()
logger.error(f"Failed to unpublish shout {shout_id}: {e}", exc_info=True)
return {"error": f"Failed to unpublish shout: {str(e)}"}
# Возвращаем сформированный словарь вместо объекта
logger.info(f"Shout {shout_id} unpublished successfully by author {author_id}")
return {"shout": shout_dict}

View File

@@ -67,30 +67,35 @@ def add_reaction_stat_columns(q):
    return q

-def get_reactions_with_stat(q, limit, offset):
+def get_reactions_with_stat(q, limit=10, offset=0):
    """
    Execute the reaction query and retrieve reactions with statistics.

    :param q: Query with reactions and statistics.
    :param limit: Number of reactions to load.
    :param offset: Pagination offset.
-    :return: List of reactions.
+    :return: List of reactions as dictionaries.
+
+    >>> get_reactions_with_stat(q, 10, 0)  # doctest: +SKIP
+    [{'id': 1, 'body': 'Текст комментария', 'stat': {'rating': 5, 'comments_count': 3}, ...}]
    """
    q = q.limit(limit).offset(offset)
    reactions = []

    with local_session() as session:
        result_rows = session.execute(q)
-        for reaction, author, shout, commented_stat, rating_stat in result_rows:
+        for reaction, author, shout, comments_count, rating_stat in result_rows:
            # Skip reactions whose shout or author is missing
            if not shout or not author:
                logger.error(f"Skipped a reaction because its shout or author is missing: {reaction.dict()}")
                continue
-            reaction.created_by = author.dict()
-            reaction.shout = shout.dict()
-            reaction.stat = {"rating": rating_stat, "comments": commented_stat}
-            reactions.append(reaction)
+            # Convert the Reaction to a dictionary for key-based access
+            reaction_dict = reaction.dict()
+            reaction_dict["created_by"] = author.dict()
+            reaction_dict["shout"] = shout.dict()
+            reaction_dict["stat"] = {"rating": rating_stat, "comments_count": comments_count}
+            reactions.append(reaction_dict)

    return reactions
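Note: returning plain dictionaries instead of mutated ORM instances sidesteps both detached-object errors and accidental writes to mapped attributes. The row-unpacking shape used above, in isolation (names are illustrative; `.dict()` is assumed to be the project's model-serialization helper):

# Each result row is a tuple: (reaction, author, shout, comments_count, rating_stat).
# Converting to a dict decouples the response from the ORM session.
def row_to_payload(row):
    reaction, author, shout, comments_count, rating_stat = row
    payload = reaction.dict()                     # assumes the model exposes .dict()
    payload["created_by"] = author.dict()
    payload["shout"] = shout.dict()
    payload["stat"] = {"rating": rating_stat, "comments_count": comments_count}
    return payload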
@@ -393,7 +398,7 @@ async def update_reaction(_, info, reaction):
        result = session.execute(reaction_query).unique().first()
        if result:
-            r, author, _shout, commented_stat, rating_stat = result
+            r, author, _shout, comments_count, rating_stat = result
            if not r or not author:
                return {"error": "Invalid reaction ID or unauthorized"}
@@ -408,7 +413,7 @@ async def update_reaction(_, info, reaction):
            session.commit()

            r.stat = {
-                "commented": commented_stat,
+                "comments_count": comments_count,
                "rating": rating_stat,
            }
@@ -482,12 +487,16 @@ def apply_reaction_filters(by, q):
    shout_slug = by.get("shout")
    if shout_slug:
        q = q.filter(Shout.slug == shout_slug)

+    shout_id = by.get("shout_id")
+    if shout_id:
+        q = q.filter(Shout.id == shout_id)
+
    shouts = by.get("shouts")
    if shouts:
        q = q.filter(Shout.slug.in_(shouts))

-    created_by = by.get("created_by")
+    created_by = by.get("created_by", by.get("author_id"))
    if created_by:
        q = q.filter(Author.id == created_by)
@@ -612,24 +621,22 @@ async def load_shout_comments(_, info, shout: int, limit=50, offset=0):
@query.field("load_comment_ratings")
async def load_comment_ratings(_, info, comment: int, limit=50, offset=0):
    """
-    Load ratings for a specified comment with pagination and statistics.
+    Load ratings for a specified comment with pagination.

    :param info: GraphQL context info.
    :param comment: Comment ID.
    :param limit: Number of ratings to load.
    :param offset: Pagination offset.
-    :return: List of reactions.
+    :return: List of ratings.
    """
    q = query_reactions()
-    q = add_reaction_stat_columns(q)

    # Filter, group, sort, limit, offset
    q = q.filter(
        and_(
            Reaction.deleted_at.is_(None),
            Reaction.reply_to == comment,
-            Reaction.kind == ReactionKind.COMMENT.value,
+            Reaction.kind.in_(RATING_REACTIONS),
        )
    )
    q = q.group_by(Reaction.id, Author.id, Shout.id)
@@ -637,3 +644,187 @@ async def load_comment_ratings(_, info, comment: int, limit=50, offset=0):
    # Retrieve and return reactions
    return get_reactions_with_stat(q, limit, offset)
@query.field("load_comments_branch")
async def load_comments_branch(
_,
_info,
shout: int,
parent_id: int | None = None,
limit=10,
offset=0,
sort="newest",
children_limit=3,
children_offset=0,
):
"""
Загружает иерархические комментарии с возможностью пагинации корневых и дочерних.
:param info: GraphQL context info.
:param shout: ID статьи.
:param parent_id: ID родительского комментария (None для корневых).
:param limit: Количество комментариев для загрузки.
:param offset: Смещение для пагинации.
:param sort: Порядок сортировки ('newest', 'oldest', 'like').
:param children_limit: Максимальное количество дочерних комментариев.
:param children_offset: Смещение для дочерних комментариев.
:return: Список комментариев с дочерними.
"""
# Создаем базовый запрос
q = query_reactions()
q = add_reaction_stat_columns(q)
# Фильтруем по статье и типу (комментарии)
q = q.filter(
and_(
Reaction.deleted_at.is_(None),
Reaction.shout == shout,
Reaction.kind == ReactionKind.COMMENT.value,
)
)
# Фильтруем по родительскому ID
if parent_id is None:
# Загружаем только корневые комментарии
q = q.filter(Reaction.reply_to.is_(None))
else:
# Загружаем только прямые ответы на указанный комментарий
q = q.filter(Reaction.reply_to == parent_id)
# Сортировка и группировка
q = q.group_by(Reaction.id, Author.id, Shout.id)
# Определяем сортировку
order_by_stmt = None
if sort.lower() == "oldest":
order_by_stmt = asc(Reaction.created_at)
elif sort.lower() == "like":
order_by_stmt = desc("rating_stat")
else: # "newest" по умолчанию
order_by_stmt = desc(Reaction.created_at)
q = q.order_by(order_by_stmt)
# Выполняем запрос для получения комментариев
comments = get_reactions_with_stat(q, limit, offset)
# Если комментарии найдены, загружаем дочерние и количество ответов
if comments:
# Загружаем количество ответов для каждого комментария
await load_replies_count(comments)
# Загружаем дочерние комментарии
await load_first_replies(comments, children_limit, children_offset, sort)
return comments
async def load_replies_count(comments):
"""
Загружает количество ответов для списка комментариев и обновляет поле stat.comments_count.
:param comments: Список комментариев, для которых нужно загрузить количество ответов.
"""
if not comments:
return
comment_ids = [comment["id"] for comment in comments]
# Запрос для подсчета количества ответов
q = (
select(Reaction.reply_to.label("parent_id"), func.count().label("count"))
.where(
and_(
Reaction.reply_to.in_(comment_ids),
Reaction.deleted_at.is_(None),
Reaction.kind == ReactionKind.COMMENT.value,
)
)
.group_by(Reaction.reply_to)
)
# Выполняем запрос
with local_session() as session:
result = session.execute(q).fetchall()
# Создаем словарь {parent_id: count}
replies_count = {row[0]: row[1] for row in result}
# Добавляем значения в комментарии
for comment in comments:
if "stat" not in comment:
comment["stat"] = {}
# Обновляем счетчик комментариев в stat
comment["stat"]["comments_count"] = replies_count.get(comment["id"], 0)
async def load_first_replies(comments, limit, offset, sort="newest"):
"""
Загружает первые N ответов для каждого комментария.
:param comments: Список комментариев, для которых нужно загрузить ответы.
:param limit: Максимальное количество ответов для каждого комментария.
:param offset: Смещение для пагинации дочерних комментариев.
:param sort: Порядок сортировки ответов.
"""
if not comments or limit <= 0:
return
# Собираем ID комментариев
comment_ids = [comment["id"] for comment in comments]
# Базовый запрос для загрузки ответов
q = query_reactions()
q = add_reaction_stat_columns(q)
# Фильтрация: только ответы на указанные комментарии
q = q.filter(
and_(
Reaction.reply_to.in_(comment_ids),
Reaction.deleted_at.is_(None),
Reaction.kind == ReactionKind.COMMENT.value,
)
)
# Группировка
q = q.group_by(Reaction.id, Author.id, Shout.id)
# Определяем сортировку
order_by_stmt = None
if sort.lower() == "oldest":
order_by_stmt = asc(Reaction.created_at)
elif sort.lower() == "like":
order_by_stmt = desc("rating_stat")
else: # "newest" по умолчанию
order_by_stmt = desc(Reaction.created_at)
q = q.order_by(order_by_stmt, Reaction.reply_to)
# Выполняем запрос - указываем limit для неограниченного количества ответов
# но не более 100 на родительский комментарий
replies = get_reactions_with_stat(q, limit=100, offset=0)
# Группируем ответы по родительским ID
replies_by_parent = {}
for reply in replies:
parent_id = reply.get("reply_to")
if parent_id not in replies_by_parent:
replies_by_parent[parent_id] = []
replies_by_parent[parent_id].append(reply)
# Добавляем ответы к соответствующим комментариям с учетом смещения и лимита
for comment in comments:
comment_id = comment["id"]
if comment_id in replies_by_parent:
parent_replies = replies_by_parent[comment_id]
# Применяем смещение и лимит
comment["first_replies"] = parent_replies[offset : offset + limit]
else:
comment["first_replies"] = []
# Загружаем количество ответов для дочерних комментариев
all_replies = [reply for replies in replies_by_parent.values() for reply in replies]
if all_replies:
await load_replies_count(all_replies)
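Note: the reply-grouping pass above is a standard bucket-by-key step; `collections.defaultdict` expresses it a bit more directly (an equivalent sketch, not the committed code):

from collections import defaultdict

replies = [
    {"id": 10, "reply_to": 1},
    {"id": 11, "reply_to": 1},
    {"id": 12, "reply_to": 2},
]

replies_by_parent: dict[int, list[dict]] = defaultdict(list)
for reply in replies:
    replies_by_parent[reply["reply_to"]].append(reply)

# Page each bucket the same way the resolver does with offset/limit
offset, limit = 0, 1
first_page = {pid: items[offset:offset + limit] for pid, items in replies_by_parent.items()}
assert first_page[1] == [{"id": 10, "reply_to": 1}]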

View File

@@ -10,7 +10,7 @@ from orm.shout import Shout, ShoutAuthor, ShoutTopic
from orm.topic import Topic
from services.db import json_array_builder, json_builder, local_session
from services.schema import query
-from services.search import search_text
+from services.search import search_text, get_search_count
from services.viewed import ViewedStorage
from utils.logger import root_logger as logger
@@ -187,12 +187,10 @@ def get_shouts_with_links(info, q, limit=20, offset=0):
    """
    shouts = []
    try:
-        # logger.info(f"Starting get_shouts_with_links with limit={limit}, offset={offset}")
        q = q.limit(limit).offset(offset)

        with local_session() as session:
            shouts_result = session.execute(q).all()
-            # logger.info(f"Got {len(shouts_result) if shouts_result else 0} shouts from query")

            if not shouts_result:
                logger.warning("No shouts found in query result")
@@ -203,7 +201,6 @@ def get_shouts_with_links(info, q, limit=20, offset=0):
                shout = None
                if hasattr(row, "Shout"):
                    shout = row.Shout
-                    # logger.debug(f"Processing shout#{shout.id} at index {idx}")
                if shout:
                    shout_id = int(f"{shout.id}")
                    shout_dict = shout.dict()
@@ -225,26 +222,22 @@ def get_shouts_with_links(info, q, limit=20, offset=0):
                        elif isinstance(row.stat, dict):
                            stat = row.stat
                        viewed = ViewedStorage.get_shout(shout_id=shout_id) or 0
-                        shout_dict["stat"] = {**stat, "viewed": viewed, "commented": stat.get("comments_count", 0)}
+                        shout_dict["stat"] = {**stat, "viewed": viewed}

                    # Handle main_topic and topics
                    topics = None
                    if has_field(info, "topics") and hasattr(row, "topics"):
                        topics = orjson.loads(row.topics) if isinstance(row.topics, str) else row.topics
-                        # logger.debug(f"Shout#{shout_id} topics: {topics}")
                        shout_dict["topics"] = topics

                    if has_field(info, "main_topic"):
                        main_topic = None
                        if hasattr(row, "main_topic"):
-                            # logger.debug(f"Raw main_topic for shout#{shout_id}: {row.main_topic}")
                            main_topic = (
                                orjson.loads(row.main_topic) if isinstance(row.main_topic, str) else row.main_topic
                            )
-                            # logger.debug(f"Parsed main_topic for shout#{shout_id}: {main_topic}")

                        if not main_topic and topics and len(topics) > 0:
-                            # logger.info(f"No main_topic found for shout#{shout_id}, using first topic from list")
                            main_topic = {
                                "id": topics[0]["id"],
                                "title": topics[0]["title"],
@@ -252,10 +245,8 @@ def get_shouts_with_links(info, q, limit=20, offset=0):
                                "is_main": True,
                            }
                        elif not main_topic:
-                            logger.warning(f"No main_topic and no topics found for shout#{shout_id}")
                            main_topic = {"id": 0, "title": "no topic", "slug": "notopic", "is_main": True}
                        shout_dict["main_topic"] = main_topic
-                        # logger.debug(f"Final main_topic for shout#{shout_id}: {main_topic}")

                    if has_field(info, "authors") and hasattr(row, "authors"):
                        shout_dict["authors"] = (
@@ -282,7 +273,6 @@ def get_shouts_with_links(info, q, limit=20, offset=0):
        logger.error(f"Fatal error in get_shouts_with_links: {e}", exc_info=True)
        raise
    finally:
-        logger.info(f"Returning {len(shouts)} shouts from get_shouts_with_links")
        return shouts
@@ -401,33 +391,49 @@ async def load_shouts_search(_, info, text, options):
    """
    limit = options.get("limit", 10)
    offset = options.get("offset", 0)

    if isinstance(text, str) and len(text) > 2:
+        # Get search results with pagination
        results = await search_text(text, limit, offset)
-        scores = {}
-        hits_ids = []
-        for sr in results:
-            shout_id = sr.get("id")
-            if shout_id:
-                shout_id = str(shout_id)
-                scores[shout_id] = sr.get("score")
-                hits_ids.append(shout_id)
+        if not results:
+            logger.info(f"No search results found for '{text}'")
+            return []

-        q = (
-            query_with_stat(info)
-            if has_field(info, "stat")
-            else select(Shout).filter(and_(Shout.published_at.is_not(None), Shout.deleted_at.is_(None)))
-        )
+        # Extract IDs in the order from the search engine
+        hits_ids = [str(sr.get("id")) for sr in results if sr.get("id")]
+
+        # Query DB for only the IDs in the current page
+        q = query_with_stat(info)
        q = q.filter(Shout.id.in_(hits_ids))
-        q = apply_filters(q, options)
-        q = apply_sorting(q, options)
-        shouts = get_shouts_with_links(info, q, limit, offset)
-        for shout in shouts:
-            shout.score = scores[f"{shout.id}"]
-        shouts.sort(key=lambda x: x.score, reverse=True)
-        return shouts
+        q = apply_filters(q, options.get("filters", {}))
+        shouts = get_shouts_with_links(info, q, len(hits_ids), 0)
+
+        # Reorder shouts to match the order from hits_ids
+        shouts_dict = {str(shout['id']): shout for shout in shouts}
+        ordered_shouts = [shouts_dict[shout_id] for shout_id in hits_ids if shout_id in shouts_dict]
+
+        return ordered_shouts
    return []

+@query.field("get_search_results_count")
+async def get_search_results_count(_, info, text):
+    """
+    Returns the total count of search results for a search query.
+
+    :param _: Root query object (unused)
+    :param info: GraphQL context information
+    :param text: Search query text
+    :return: Total count of results
+    """
+    if isinstance(text, str) and len(text) > 2:
+        count = await get_search_count(text)
+        return {"count": count}
+    return {"count": 0}
@query.field("load_shouts_unrated") @query.field("load_shouts_unrated")
async def load_shouts_unrated(_, info, options): async def load_shouts_unrated(_, info, options):
""" """

View File

@@ -10,6 +10,7 @@ from cache.cache import (
)
from orm.author import Author
from orm.topic import Topic
+from orm.reaction import ReactionKind
from resolvers.stat import get_with_stat
from services.auth import login_required
from services.db import local_session
@@ -112,7 +113,7 @@ async def get_topics_with_stats(limit=100, offset=0, community_id=None, by=None)
        shouts_stats_query = f"""
            SELECT st.topic, COUNT(DISTINCT s.id) as shouts_count
            FROM shout_topic st
-            JOIN shout s ON st.shout = s.id AND s.deleted_at IS NULL
+            JOIN shout s ON st.shout = s.id AND s.deleted_at IS NULL AND s.published_at IS NOT NULL
            WHERE st.topic IN ({",".join(map(str, topic_ids))})
            GROUP BY st.topic
        """
@@ -121,12 +122,35 @@ async def get_topics_with_stats(limit=100, offset=0, community_id=None, by=None)
        # Query for follower statistics for the selected topics
        followers_stats_query = f"""
            SELECT topic, COUNT(DISTINCT follower) as followers_count
-            FROM topic_followers
+            FROM topic_followers tf
            WHERE topic IN ({",".join(map(str, topic_ids))})
            GROUP BY topic
        """
        followers_stats = {row[0]: row[1] for row in session.execute(text(followers_stats_query))}

+        # Query for author statistics for the selected topics
+        authors_stats_query = f"""
+            SELECT st.topic, COUNT(DISTINCT sa.author) as authors_count
+            FROM shout_topic st
+            JOIN shout s ON st.shout = s.id AND s.deleted_at IS NULL AND s.published_at IS NOT NULL
+            JOIN shout_author sa ON sa.shout = s.id
+            WHERE st.topic IN ({",".join(map(str, topic_ids))})
+            GROUP BY st.topic
+        """
+        authors_stats = {row[0]: row[1] for row in session.execute(text(authors_stats_query))}
+
+        # Query for comment statistics for the selected topics
+        comments_stats_query = f"""
+            SELECT st.topic, COUNT(DISTINCT r.id) as comments_count
+            FROM shout_topic st
+            JOIN shout s ON st.shout = s.id AND s.deleted_at IS NULL AND s.published_at IS NOT NULL
+            JOIN reaction r ON r.shout = s.id AND r.kind = '{ReactionKind.COMMENT.value}' AND r.deleted_at IS NULL
+            JOIN author a ON r.created_by = a.id AND a.deleted_at IS NULL
+            WHERE st.topic IN ({",".join(map(str, topic_ids))})
+            GROUP BY st.topic
+        """
+        comments_stats = {row[0]: row[1] for row in session.execute(text(comments_stats_query))}
+
        # Assemble the result with the statistics attached
        result = []
        for topic in topics:
@@ -134,6 +158,8 @@ async def get_topics_with_stats(limit=100, offset=0, community_id=None, by=None)
            topic_dict["stat"] = {
                "shouts": shouts_stats.get(topic.id, 0),
                "followers": followers_stats.get(topic.id, 0),
+                "authors": authors_stats.get(topic.id, 0),
+                "comments": comments_stats.get(topic.id, 0),
            }
            result.append(topic_dict)
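Note: these statistics queries interpolate `topic_ids` into the SQL with an f-string; that is safe only while the ids are integers generated server-side. With SQLAlchemy's `text()` the same query can use an expanding bind parameter instead, which is the defensive variant (a sketch, not the committed code):

from sqlalchemy import bindparam, text

followers_stats_stmt = text(
    """
    SELECT topic, COUNT(DISTINCT follower) AS followers_count
    FROM topic_followers
    WHERE topic IN :topic_ids
    GROUP BY topic
    """
).bindparams(bindparam("topic_ids", expanding=True))

# session.execute(followers_stats_stmt, {"topic_ids": topic_ids})
# The driver expands :topic_ids into one placeholder per id and binds the values safely.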
@@ -202,23 +228,6 @@ async def get_topics_all(_, _info):
    return await get_all_topics()

-# Query for topics with pagination and statistics
-@query.field("get_topics_paginated")
-async def get_topics_paginated(_, _info, limit=100, offset=0, by=None):
-    """
-    Returns a list of topics with pagination and statistics.
-
-    Args:
-        limit: Maximum number of topics to return
-        offset: Pagination offset
-        by: Optional sorting parameters
-
-    Returns:
-        list: Topics with their statistics
-    """
-    return await get_topics_with_stats(limit, offset, None, by)
-
# Query for topics by community
@query.field("get_topics_by_community")
async def get_topics_by_community(_, _info, community_id: int, limit=100, offset=0, by=None):

View File

@@ -33,7 +33,6 @@ input DraftInput {
    main_topic_id: Int # Changed from main_topic: Topic
    media: [MediaItemInput] # Changed to use MediaItemInput
    lead: String
-    description: String
    subtitle: String
    lang: String
    seo: String
@@ -93,12 +92,14 @@ input LoadShoutsOptions {

input ReactionBy {
    shout: String
+    shout_id: Int
    shouts: [String]
    search: String
    kinds: [ReactionKind]
    reply_to: Int # filter
    topic: String
    created_by: Int
+    author_id: Int
    author: String
    after: Int
    sort: ReactionSort # sort

View File

@@ -4,7 +4,7 @@ type Query {
    get_author_id(user: String!): Author
    get_authors_all: [Author]
    load_authors_by(by: AuthorsBy!, limit: Int, offset: Int): [Author]
-    # search_authors(what: String!): [Author]
+    load_authors_search(text: String!, limit: Int, offset: Int): [Author!] # Search for authors by name or bio

    # community
    get_community: Community
@@ -26,10 +26,14 @@ type Query {
    load_shout_ratings(shout: Int!, limit: Int, offset: Int): [Reaction]
    load_comment_ratings(comment: Int!, limit: Int, offset: Int): [Reaction]

+    # branched comments pagination
+    load_comments_branch(shout: Int!, parent_id: Int, limit: Int, offset: Int, sort: ReactionSort, children_limit: Int, children_offset: Int): [Reaction]
+
    # reader
    get_shout(slug: String, shout_id: Int): Shout
    load_shouts_by(options: LoadShoutsOptions): [Shout]
    load_shouts_search(text: String!, options: LoadShoutsOptions): [SearchResult]
+    get_search_results_count(text: String!): CountResult!
    load_shouts_bookmarked(options: LoadShoutsOptions): [Shout]

    # rating
@@ -57,7 +61,7 @@ type Query {
    get_topic(slug: String!): Topic
    get_topics_all: [Topic]
    get_topics_by_author(slug: String, user: String, author_id: Int): [Topic]
-    get_topics_by_community(slug: String, community_id: Int): [Topic]
+    get_topics_by_community(community_id: Int!, limit: Int, offset: Int): [Topic]

    # notifier
    load_notifications(after: Int!, limit: Int, offset: Int): NotificationsResult!

View File

@@ -55,6 +55,7 @@ type Reaction {
    stat: Stat
    oid: String
    # old_thread: String
+    first_replies: [Reaction]
}

type MediaItem {
@@ -79,7 +80,6 @@ type Shout {
    layout: String!
    lead: String
-    description: String
    subtitle: String
    lang: String
    cover: String
@@ -99,6 +99,7 @@ type Shout {
    featured_at: Int
    deleted_at: Int

+    seo: String # generated if not set
    version_of: Shout # TODO: use version_of somewhere
    draft: Draft
    media: [MediaItem]
@@ -106,17 +107,22 @@ type Shout {
    score: Float
}

+type PublicationInfo {
+    id: Int!
+    slug: String!
+    published_at: Int
+}
+
type Draft {
    id: Int!
    created_at: Int!
    created_by: Author!
+    community: Community!
    layout: String
    slug: String
    title: String
    subtitle: String
    lead: String
-    description: String
    body: String
    media: [MediaItem]
    cover: String
@@ -129,14 +135,14 @@ type Draft {
    deleted_at: Int
    updated_by: Author
    deleted_by: Author
-    authors: [Author]
-    topics: [Topic]
+    authors: [Author]!
+    topics: [Topic]!
+    publication: PublicationInfo
}

type Stat {
    rating: Int
-    commented: Int
+    comments_count: Int
    viewed: Int
    last_commented_at: Int
}

@@ -207,6 +213,7 @@ type CommonResult {
}

type SearchResult {
+    id: Int!
    slug: String!
    title: String!
    cover: String
@@ -274,3 +281,7 @@ type MyRateComment {
    my_rate: ReactionKind
}

+type CountResult {
+    count: Int!
+}

View File

@@ -19,7 +19,7 @@ from sqlalchemy import (
    inspect,
    text,
)
-from sqlalchemy.orm import Session, configure_mappers, declarative_base
+from sqlalchemy.orm import Session, configure_mappers, declarative_base, joinedload
from sqlalchemy.sql.schema import Table

from settings import DB_URL
@@ -259,3 +259,32 @@ def get_json_builder():
# Use them throughout the code
json_builder, json_array_builder, json_cast = get_json_builder()
# Fetch all shouts, with authors preloaded
# This function is used for search indexing
async def fetch_all_shouts(session=None):
    """Fetch all published shouts for search indexing with authors preloaded"""
    from orm.shout import Shout

    close_session = False
    if session is None:
        session = local_session()
        close_session = True

    try:
        # Fetch only published and non-deleted shouts with authors preloaded
        query = session.query(Shout).options(
            joinedload(Shout.authors)
        ).filter(
            Shout.published_at.is_not(None),
            Shout.deleted_at.is_(None)
        )
        shouts = query.all()
        return shouts
    except Exception as e:
        logger.error(f"Error fetching shouts for search indexing: {e}")
        return []
    finally:
        if close_session:
            session.close()
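Note: the optional-session signature means the caller owns the session's lifetime whenever it passes one in; the function only closes what it opened itself. The generic shape of that pattern, reusing the file's `local_session` factory (a sketch, not the committed code):

def with_optional_session(work, session=None):
    """Run `work(session)` against a caller-owned session, or create one
    here and take responsibility for closing it afterwards."""
    close_session = False
    if session is None:
        session = local_session()  # assumes the project's session factory
        close_session = True
    try:
        return work(session)
    finally:
        if close_session:
            session.close()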

View File

@@ -53,3 +53,66 @@ async def notify_follower(follower: dict, author_id: int, action: str = "follow"
    except Exception as e:
        # Log the error and re-raise it
        logger.error(f"Failed to publish to channel {channel_name}: {e}")
async def notify_draft(draft_data, action: str = "publish"):
    """
    Sends a notification about a draft being published or updated.

    The function makes sure the draft data is serialized correctly, including
    the related attributes (topics, authors).

    Args:
        draft_data (dict): Draft data; must contain at least id and title
        action (str, optional): The action ("publish", "update"). Defaults to "publish"

    Returns:
        None

    Examples:
        >>> draft = {"id": 1, "title": "Тестовый черновик", "slug": "test-draft"}
        >>> await notify_draft(draft, "publish")
    """
    channel_name = "draft"
    try:
        # Make sure all required data is present
        # and the object does not need access to detached attributes
        if isinstance(draft_data, dict):
            draft_payload = draft_data
        else:
            # If this is an ORM object, convert it to a dictionary with the needed attributes
            draft_payload = {
                "id": getattr(draft_data, "id", None),
                "slug": getattr(draft_data, "slug", None),
                "title": getattr(draft_data, "title", None),
                "subtitle": getattr(draft_data, "subtitle", None),
                "media": getattr(draft_data, "media", None),
                "created_at": getattr(draft_data, "created_at", None),
                "updated_at": getattr(draft_data, "updated_at", None)
            }

            # If the related attributes were loaded, include them as well
            if hasattr(draft_data, "topics") and draft_data.topics is not None:
                draft_payload["topics"] = [
                    {"id": t.id, "name": t.name, "slug": t.slug}
                    for t in draft_data.topics
                ]

            if hasattr(draft_data, "authors") and draft_data.authors is not None:
                draft_payload["authors"] = [
                    {"id": a.id, "name": a.name, "slug": a.slug, "pic": getattr(a, "pic", None)}
                    for a in draft_data.authors
                ]

        data = {"payload": draft_payload, "action": action}

        # Persist the notification
        save_notification(action, channel_name, data.get("payload"))

        # Publish to Redis
        json_data = orjson.dumps(data)
        if json_data:
            await redis.publish(channel_name, json_data)
    except Exception as e:
        logger.error(f"Failed to publish to channel {channel_name}: {e}")

View File

@@ -1,14 +1,15 @@
from asyncio.log import logger

import httpx
-from ariadne import MutationType, QueryType
+from ariadne import MutationType, ObjectType, QueryType

from services.db import create_table_if_not_exists, local_session
from settings import AUTH_URL

query = QueryType()
mutation = MutationType()
-resolvers = [query, mutation]
+type_draft = ObjectType("Draft")
+
+resolvers = [query, mutation, type_draft]

async def request_graphql_data(gql, url=AUTH_URL, headers=None):
@@ -28,12 +29,19 @@ async def request_graphql_data(gql, url=AUTH_URL, headers=None):
        async with httpx.AsyncClient() as client:
            response = await client.post(url, json=gql, headers=headers)
            if response.status_code == 200:
-                data = response.json()
-                errors = data.get("errors")
-                if errors:
-                    logger.error(f"{url} response: {data}")
+                # Check if the response has content before parsing
+                if response.content and len(response.content.strip()) > 0:
+                    try:
+                        data = response.json()
+                        errors = data.get("errors")
+                        if errors:
+                            logger.error(f"{url} response: {data}")
+                        else:
+                            return data
+                    except Exception as json_err:
+                        logger.error(f"JSON decode error: {json_err}, Response content: {response.text[:100]}")
                else:
-                    return data
+                    logger.error(f"{url}: Response is empty")
            else:
                logger.error(f"{url}: {response.status_code} {response.text}")
    except Exception as _e:
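Note: the guard matters because calling `.json()` on an empty 200 body raises a JSONDecodeError rather than returning None. The same defense expressed compactly (a sketch; `response` stands for an httpx Response):

import json

def safe_json(response):
    """Parse a response body only when it is non-empty and valid JSON."""
    if not response.content or not response.content.strip():
        return None                      # empty 200 body: nothing to parse
    try:
        return response.json()
    except json.JSONDecodeError:
        return None                      # malformed body: treat as no data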

File diff suppressed because it is too large

View File

@@ -2,9 +2,7 @@ import asyncio
import os
import time
from datetime import datetime, timedelta, timezone
-from typing import Dict
+from typing import Dict, Optional
-
-import orjson

# ga
from google.analytics.data_v1beta import BetaAnalyticsDataClient
@@ -20,33 +18,39 @@ from orm.author import Author
from orm.shout import Shout, ShoutAuthor, ShoutTopic
from orm.topic import Topic
from services.db import local_session
+from services.redis import redis
from utils.logger import root_logger as logger

GOOGLE_KEYFILE_PATH = os.environ.get("GOOGLE_KEYFILE_PATH", "/dump/google-service.json")
GOOGLE_PROPERTY_ID = os.environ.get("GOOGLE_PROPERTY_ID", "")
-VIEWS_FILEPATH = "/dump/views.json"

class ViewedStorage:
+    """
+    Stores and provides access to view-count data.
+    Uses Redis as the primary storage and Google Analytics for collecting new data.
+    """
+
    lock = asyncio.Lock()
-    precounted_by_slug = {}
    views_by_shout = {}
    shouts_by_topic = {}
    shouts_by_author = {}
    views = None
    period = 60 * 60  # every hour
-    analytics_client: BetaAnalyticsDataClient | None = None
+    analytics_client: Optional[BetaAnalyticsDataClient] = None
    auth_result = None
    running = False
+    redis_views_key = None
+    last_update_timestamp = 0
    start_date = datetime.now().strftime("%Y-%m-%d")

    @staticmethod
    async def init():
-        """Connect to the Google Analytics client using authentication"""
+        """Connect to the Google Analytics client and load view data from Redis"""
        self = ViewedStorage
        async with self.lock:
-            # Load pre-counted views from the JSON file
-            self.load_precounted_views()
+            # Load pre-counted views from Redis
+            await self.load_views_from_redis()

            os.environ.setdefault("GOOGLE_APPLICATION_CREDENTIALS", GOOGLE_KEYFILE_PATH)
            if GOOGLE_KEYFILE_PATH and os.path.isfile(GOOGLE_KEYFILE_PATH):
@@ -62,40 +66,54 @@ class ViewedStorage:
             self.running = False

     @staticmethod
-    def load_precounted_views():
-        """Load precounted views from the JSON file"""
+    async def load_views_from_redis():
+        """Load precounted views from Redis"""
         self = ViewedStorage
-        viewfile_path = VIEWS_FILEPATH
-        if not os.path.exists(viewfile_path):
-            viewfile_path = os.path.join(os.path.curdir, "views.json")
-            if not os.path.exists(viewfile_path):
-                logger.warning(" * views.json not found")
-                return
-        logger.info(f" * loading views from {viewfile_path}")
-        try:
-            start_date_int = os.path.getmtime(viewfile_path)
-            start_date_str = datetime.fromtimestamp(start_date_int).strftime("%Y-%m-%d")
-            self.start_date = start_date_str
+
+        # Connect to Redis if the connection is not yet established
+        if not redis._client:
+            await redis.connect()
+
+        # Fetch all migrated_views_* keys and pick the most recent one
+        keys = await redis.execute("KEYS", "migrated_views_*")
+        if not keys:
+            logger.warning(" * No migrated_views keys found in Redis")
+            return
+
+        # Keep only timestamp-style keys (exclude migrated_views_slugs)
+        timestamp_keys = [k for k in keys if k != "migrated_views_slugs"]
+        if not timestamp_keys:
+            logger.warning(" * No migrated_views timestamp keys found in Redis")
+            return
+
+        # Sort by creation time (encoded in the key name) and take the latest
+        timestamp_keys.sort()
+        latest_key = timestamp_keys[-1]
+        self.redis_views_key = latest_key
+
+        # Read the creation timestamp to set start_date
+        timestamp = await redis.execute("HGET", latest_key, "_timestamp")
+        if timestamp:
+            self.last_update_timestamp = int(timestamp)
+            timestamp_dt = datetime.fromtimestamp(int(timestamp))
+            self.start_date = timestamp_dt.strftime("%Y-%m-%d")
+
+        # If the data is from today, treat it as up to date
         now_date = datetime.now().strftime("%Y-%m-%d")
         if now_date == self.start_date:
-            logger.info(" * views data is up to date!")
+            logger.info(" * Views data is up to date!")
         else:
-            logger.warn(f" * {viewfile_path} is too old: {self.start_date}")
+            logger.warning(f" * Views data is from {self.start_date}, may need update")

-            with open(viewfile_path, "r") as file:
-                precounted_views = orjson.loads(file.read())
-                self.precounted_by_slug.update(precounted_views)
-                logger.info(f" * {len(precounted_views)} shouts with views was loaded.")
-        except Exception as e:
-            logger.error(f"precounted views loading error: {e}")
+        # Report how many entries were loaded
+        total_entries = await redis.execute("HGET", latest_key, "_total")
+        if total_entries:
+            logger.info(f" * {total_entries} shouts with views loaded from Redis key: {latest_key}")

     # noinspection PyTypeChecker
     @staticmethod
     async def update_pages():
         """Request all pages from Google Analytics, sorted by view count"""
         self = ViewedStorage
         logger.info(" ⎧ views update from Google Analytics ---")
         if self.running:
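The loader above only reads these Redis structures; the writer is outside this diff. For orientation, a hypothetical population script that would satisfy the reader's expectations — a migrated_views_<unix-ts> hash holding shout_id → views plus _timestamp and _total meta fields, and a migrated_views_slugs hash for slug lookups — might look like this sketch. It uses redis-py's asyncio client rather than the project's services.redis wrapper; everything except the key formats is an assumption:

import asyncio
import time

from redis.asyncio import Redis  # assumption: redis-py >= 4.2


async def migrate_views(views_by_id: dict, views_by_slug: dict) -> None:
    """Hypothetical writer for the keys that load_views_from_redis reads."""
    client = Redis()
    ts = int(time.time())
    key = f"migrated_views_{ts}"  # equal-length unix timestamps sort lexically, so .sort() finds the latest

    mapping = {str(shout_id): views for shout_id, views in views_by_id.items()}
    mapping["_timestamp"] = ts            # read back into last_update_timestamp / start_date
    mapping["_total"] = len(views_by_id)  # only used for the log line on load

    await client.hset(key, mapping=mapping)
    if views_by_slug:
        # separate hash backing the slug fallback in get_shout
        await client.hset("migrated_views_slugs", mapping=views_by_slug)
    await client.aclose()  # redis-py 5.x; use close() on 4.x


if __name__ == "__main__":
    asyncio.run(migrate_views({42: 1234}, {"my-post": 1234}))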
@@ -140,15 +158,40 @@ class ViewedStorage:
             self.running = False

     @staticmethod
-    def get_shout(shout_slug="", shout_id=0) -> int:
-        """Get the view-count metric for a shout by slug or id."""
+    async def get_shout(shout_slug="", shout_id=0) -> int:
+        """
+        Get the view-count metric for a shout by slug or id.
+
+        Args:
+            shout_slug: Slug of the publication
+            shout_id: ID of the publication
+
+        Returns:
+            int: Number of views
+        """
         self = ViewedStorage
+
+        # Read from Redis for the new storage scheme
+        if not redis._client:
+            await redis.connect()
+
         fresh_views = self.views_by_shout.get(shout_slug, 0)
-        precounted_views = self.precounted_by_slug.get(shout_slug, 0)
-        return fresh_views + precounted_views
+
+        # If an id is given, try the migrated_views_<timestamp> hash in Redis
+        if shout_id and self.redis_views_key:
+            precounted_views = await redis.execute("HGET", self.redis_views_key, str(shout_id))
+            if precounted_views:
+                return fresh_views + int(precounted_views)
+
+        # Without an id (or without data), fall back to the per-slug hash
+        precounted_views = await redis.execute("HGET", "migrated_views_slugs", shout_slug)
+        if precounted_views:
+            return fresh_views + int(precounted_views)
+
+        return fresh_views

     @staticmethod
-    def get_shout_media(shout_slug) -> Dict[str, int]:
+    async def get_shout_media(shout_slug) -> Dict[str, int]:
         """Get playback metrics for a shout by slug."""
         self = ViewedStorage
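The rewritten get_shout resolves views in three steps: in-memory GA counters by slug, then the id-keyed migration hash, then the slug-keyed fallback hash. A short usage sketch (slug and id values are illustrative):

async def example() -> None:
    # init() connects to GA and selects the latest migrated_views_* key
    await ViewedStorage.init()

    # With an id: HGET migrated_views_<timestamp> "42" is tried first
    total = await ViewedStorage.get_shout(shout_slug="my-post", shout_id=42)

    # Slug only: falls back to HGET migrated_views_slugs "my-post"
    total_by_slug = await ViewedStorage.get_shout(shout_slug="my-post")

    print(total, total_by_slug)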
@@ -157,23 +200,29 @@ class ViewedStorage:
         return self.views_by_shout.get(shout_slug, 0)

     @staticmethod
-    def get_topic(topic_slug) -> int:
+    async def get_topic(topic_slug) -> int:
         """Get the total view count for a topic."""
         self = ViewedStorage
-        return sum(self.views_by_shout.get(shout_slug, 0) for shout_slug in self.shouts_by_topic.get(topic_slug, []))
+        views_count = 0
+        for shout_slug in self.shouts_by_topic.get(topic_slug, []):
+            views_count += await self.get_shout(shout_slug=shout_slug)
+        return views_count

     @staticmethod
-    def get_author(author_slug) -> int:
+    async def get_author(author_slug) -> int:
         """Get the total view count for an author."""
         self = ViewedStorage
-        return sum(self.views_by_shout.get(shout_slug, 0) for shout_slug in self.shouts_by_author.get(author_slug, []))
+        views_count = 0
+        for shout_slug in self.shouts_by_author.get(author_slug, []):
+            views_count += await self.get_shout(shout_slug=shout_slug)
+        return views_count

     @staticmethod
     def update_topics(shout_slug):
         """Update topic counters by shout slug"""
         self = ViewedStorage
         with local_session() as session:
             # Helper function to avoid code repetition
             def update_groups(dictionary, key, value):
                 dictionary[key] = list(set(dictionary.get(key, []) + [value]))
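Because get_topic and get_author now await get_shout per slug, each call can issue one Redis HGET per shout. If that ever became a bottleneck, a batched variant could fetch all slugs in one HMGET. This is a hypothetical alternative, not part of the diff: it assumes the services.redis wrapper forwards extra arguments the way the HGET calls above do, and it only consults the slug hash, trading the id-keyed lookup for batching:

async def get_topic_batched(topic_slug: str) -> int:
    """Hypothetical: sum views for a topic with a single HMGET round-trip."""
    self = ViewedStorage
    slugs = self.shouts_by_topic.get(topic_slug, [])
    if not slugs:
        return 0
    # one round-trip for all precounted slug views
    precounted = await redis.execute("HMGET", "migrated_views_slugs", *slugs)
    return sum(
        self.views_by_shout.get(slug, 0) + int(views or 0)
        for slug, views in zip(slugs, precounted)
    )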

utils/__init__.py (new file, 1 line)

@@ -0,0 +1 @@
+
utils/html_wrapper.py (new file, 38 lines)

@@ -0,0 +1,38 @@
+"""
+Module for processing HTML fragments
+"""
+
+
+def wrap_html_fragment(fragment: str) -> str:
+    """
+    Wraps an HTML fragment in a full HTML structure for correct processing.
+
+    Args:
+        fragment: HTML fragment to process
+
+    Returns:
+        str: Complete HTML document
+
+    Example:
+        >>> wrap_html_fragment("<p>Paragraph text</p>")
+        '<!DOCTYPE html><html><head><meta charset="utf-8"></head><body><p>Paragraph text</p></body></html>'
+    """
+    if not fragment or not fragment.strip():
+        return fragment
+
+    # Check whether the content is already a full HTML document
+    is_full_html = fragment.strip().startswith('<!DOCTYPE') or fragment.strip().startswith('<html')
+
+    # If it is a fragment, wrap it in a complete HTML document
+    if not is_full_html:
+        return f"""<!DOCTYPE html>
+<html>
+  <head>
+    <meta charset="utf-8">
+    <title></title>
+  </head>
+  <body>
+    {fragment}
+  </body>
+</html>"""
+    return fragment
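A quick usage sketch for the new helper; parsing the wrapped result with BeautifulSoup is an assumption for illustration, since the diff does not show the caller:

from bs4 import BeautifulSoup  # assumption: any HTML parser that prefers full documents

from utils.html_wrapper import wrap_html_fragment

html = wrap_html_fragment("<p>Paragraph text</p>")
soup = BeautifulSoup(html, "html.parser")
print(soup.body.p.get_text())  # -> Paragraph text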