Internet radio browser GUI for music/video streams from various directory services.

branch: streamtuner2


Check-in [0afbbc5c26]

Overview
Comment: Implement "buffy" mode for just keeping Xiph's YP.XML in memory once traversed.
SHA1: 0afbbc5c263476a04dcc0e7ae74763d82bb40260
User & Date: mario on 2015-05-03 09:35:10
Context
2015-05-03 14:07: Remove print statement. (check-in: cf6b582c7e, user: mario, tags: trunk)
2015-05-03 09:35: Implement "buffy" mode for just keeping Xiph's YP.XML in memory once traversed. (check-in: 0afbbc5c26, user: mario, tags: trunk)
2015-05-03 09:25: Consolidate bitrate filter in main update_streams() method. Fix conjoined category strings. (check-in: 2793a3e6f8, user: mario, tags: trunk)
Changes

Modified channels/xiph.py from [73380fcbf4] to [6db1c9f004].

@@ -26,15 +26,16 @@
 # The category list is hardwired in this plugin. And there
 # are three station fetching modes now:
 #
 #  → "JSON cache" retrieves a refurbished JSON station list,
 #    both sliceable genres and searchable.
 #
 #  → "Clunky XML" fetches the olden YP.XML, which is really
-#    slow, then slices out genres. No search.
+#    slow, then slices out genres. No search. With the secret
+#    "buffy" mode keeps all streams buffered.
 #
 #  → "Forbidden Fruits" extracts from dir.xiph.org HTML pages,
 #    with homepages and listener/max infos available. Search
 #    is also possible.
 #
 # The bitrate filter can strip any low-quality entries, but
 # retains `0` entries (which just lack meta information and
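The comment block above describes a bitrate filter that strips low-quality entries but retains `0` entries, which merely lack meta information. A minimal sketch of that behavior; the function name `filter_bitrate` and the threshold default are illustrative, not the plugin's actual API:

```python
def filter_bitrate(rows, min_bitrate=64):
    """Keep rows at or above min_bitrate, but retain unknown (0) bitrates.

    A `bitrate` of 0 only means the station sent no meta information,
    so it should not be discarded as low quality.
    """
    return [r for r in rows if r["bitrate"] == 0 or r["bitrate"] >= min_bitrate]

rows = [
    {"title": "A", "bitrate": 128},  # kept: above threshold
    {"title": "B", "bitrate": 32},   # dropped: low quality
    {"title": "C", "bitrate": 0},    # kept: just missing meta info
]
print(filter_bitrate(rows))
```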
@@ -113,15 +114,14 @@
       data = ahttp.get(self.json_url, params=params)
       #log.DATA(data)
       
       #-- extract
       l = []
       data = json.loads(data)
       for e in data:
-          #log.DATA(e)
           if not len(l) or l[-1]["title"] != e["stream_name"]:
               l.append({
                 "title": e["stream_name"],
                 "url": e["listen_url"],
                 "format": e["type"],
                 "bitrate": int(e["bitrate"]),
                 "genre": e["genre"],
@@ -143,16 +143,18 @@
       # But it's a huge waste of memory to keep it around for unused
       # categories.  Extracting all streams{} at once would be worse. Yet
       # enabling this buffer method prevents partial reloading..
       if conf.xiph_source != "buffy":
           buffy = []

       # Get XML blob
-      yp = ahttp.get(self.xml_url, statusmsg="Brace yourselves, still downloading the yp.xml blob.")
-      log.DATA("returned")
+      if not buffy:
+          yp = ahttp.get(self.xml_url, statusmsg="Brace yourselves, still downloading the yp.xml blob.")
+      else:
+          yp = "<none/>"
       self.status("Yes, XML parsing isn't much faster either.", timeout=20)
       for entry in xml.dom.minidom.parseString(yp).getElementsByTagName("entry"):
           buffy.append({
               "title": x(entry, "server_name"),
               "url": x(entry, "listen_url"),
               "format": self.mime_fmt(x(entry, "server_type")[6:]),
               "bitrate": bitrate(x(entry, "bitrate")),
@@ -166,21 +168,19 @@
           })
       self.status("This. Is. Happening. Now.")

       # Filter out a single subtree
       l = []
       if cat:
           rx = re.compile(cat.lower())
-          l = []
-          for row in buffy:
-              if rx.search(row["genre"]):
-                  l.append(row)
+          l = [row for row in buffy if rx.search(row["genre"])]

+      # Search is actually no problem. Just don't want to. Nobody is using the YP.XML mode anyway..
       elif search:
-	      pass
+          pass

       # Result category
       return l



   # Fetch directly from website. Which Xiph does not approve of; but
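The final hunk condenses the category filter into a list comprehension: the category name becomes a lowercase regex that is matched against each buffered row's genre string. A standalone version of that filter, with made-up sample rows:

```python
import re

# Sample buffered rows; titles and genres are illustrative only.
buffy = [
    {"title": "Jazz24",  "genre": "jazz smooth"},
    {"title": "MetalFM", "genre": "metal"},
]

cat = "Jazz"
rx = re.compile(cat.lower())  # case-insensitive by lowercasing the pattern
l = [row for row in buffy if rx.search(row["genre"])]
print([row["title"] for row in l])  # → ['Jazz24']
```

Note that `re.search` matches anywhere in the genre string, so a category like "rock" would also match a "post-rock" genre tag.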