
Poprad Ice-Hockey Team: Slovakia's Top Roster & Stats Overview

Overview of Poprad Ice-Hockey Team

The Poprad ice-hockey team hails from Slovakia and competes in the Slovak Extraliga, the top tier of Slovak ice hockey. Established in 1947, the club has a storied history and is currently led by head coach Ján Hlavatý. Known for their dynamic play and strong fan support, Poprad remains a formidable competitor in the league.

Team History and Achievements

Poprad has an illustrious history with multiple league titles to its name. The team won its first championship in 1965 and has since secured several more titles, including notable victories in the late 20th century. They have consistently been contenders for the top positions, showcasing their prowess on the ice.

Current Squad and Key Players

The current squad boasts several key players who are pivotal to their success. Star forward Martin Bartek is renowned for his scoring ability, while defenseman Peter Čerešňák provides crucial defensive support. Goaltender Tomáš Košťál is another standout performer, known for his impressive save percentage.

Team Playing Style and Tactics

Poprad employs a fast-paced offensive strategy, often utilizing a 1-3-1 formation to maximize puck control and create scoring opportunities. Their strengths lie in their aggressive forechecking and quick transitions from defense to offense. However, they can be vulnerable to counterattacks due to occasional lapses in defensive coverage.

Interesting Facts and Unique Traits

The team is affectionately nicknamed “The Mountain Eagles,” reflecting their proud regional identity. Poprad’s fanbase is passionate and dedicated, often filling the arena with enthusiastic support. Rivalries with teams like HKm Zvolen add an extra layer of excitement to their matches.

Lists & Rankings of Players, Stats, or Performance Metrics

  • Martin Bartek: Top scorer 🎰
  • Peter Čerešňák: Defensive stalwart ✅
  • Tomáš Košťál: Goaltender with best save percentage 💡

Comparisons with Other Teams in the League or Division

Compared to other teams in the Slovak Extraliga, Poprad stands out for its balanced attack and solid defense. While teams like HK Dukla Trenčín may have stronger individual players, Poprad’s cohesive team play often gives them an edge in crucial games.

Case Studies or Notable Matches

A memorable match was their victory over HKm Zvolen in the 2019 season finale, which secured them a playoff spot. This game highlighted their resilience and strategic depth under pressure.

Tables Summarizing Team Stats, Recent Form, Head-to-Head Records, or Odds

  • Recent Form (Last 5 Games): W-W-L-W-L
  • Head-to-Head Record vs HKm Zvolen (2023): P: 1, W: 0, L: 1, D: 0
  • Odds (Next Game): +150 (Win), +120 (Draw), +200 (Loss)
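The betting tips below lean on odds like those listed above. As a quick illustrative aid (not part of the original article), here is a minimal Python sketch that converts those moneyline odds into implied probabilities using the standard formula; the odds values are simply the ones from the table.

```python
def implied_probability(moneyline_odds: int) -> float:
    """Convert American (moneyline) odds into an implied probability."""
    if moneyline_odds > 0:
        return 100 / (moneyline_odds + 100)
    return -moneyline_odds / (-moneyline_odds + 100)

# Odds for Poprad's next game, taken from the table above.
odds = {"Win": +150, "Draw": +120, "Loss": +200}

for outcome, line in odds.items():
    print(f"{outcome}: {implied_probability(line):.1%}")
# Win: 40.0%, Draw: 45.5%, Loss: 33.3%
# (the probabilities sum to more than 100% because of the bookmaker's margin)
```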

Tips & Recommendations for Analyzing the Team or Betting Insights

  • Analyze recent form trends before placing bets.
  • Closely watch player statistics like goals per game for key players such as Martin Bartek.
  • Evaluate head-to-head records against upcoming opponents for better betting decisions.

Quotes or Expert Opinions about the Team

“Poprad’s ability to adapt during games makes them unpredictable opponents,” says sports analyst Marek Novák.

Pros & Cons of the Team’s Current Form or Performance

  • ✅ Strong offensive capabilities led by top scorers.
  • ❌ Occasional defensive vulnerabilities that can be exploited by opponents.
  • ✅ Cohesive teamwork enhances performance under pressure.
  • ❌ Injuries to key players can disrupt team dynamics.

<>: Hi there! I'm working on implementing a custom attention mechanism using PyTorch's `nn.Module`, but I'm running into some issues that I can't quite figure out. Here's what my code looks like:

```python
import torch
import torch.nn as nn

class Attention(nn.Module):
    def __init__(self, attention_size=512, batch_first=False, non_linearity='tanh'):
        super(Attention, self).__init__()
        self.batch_first = batch_first
        if non_linearity == 'relu':
            activation = nn.ReLU()
        else:
            activation = nn.Tanh()
        self.attention_vector = nn.Parameter(torch.FloatTensor(attention_size))
        self.attention_vector.data.uniform_(-0.01, 0.01)
        self.activation = activation

    def get_attention(self, inputs):
        if self.batch_first:
            inputs = inputs.permute(1, 0, 2)
        # inputs: [seq_len, batch_size, input_size]
        # scores: [batch_size, sentence_len]
        scores = self.activation(inputs.matmul(self.attention_vector))
        scores = scores.squeeze()
        return scores

    def forward(self, inputs):
        scores = self.get_attention(inputs)
        weights = nn.functional.softmax(scores, dim=1)
        # weights: [batch_size, sentence_len]
        if self.batch_first:
            inputs = inputs.permute(1, 0, 2)
        # Here I mistakenly used 'inputs' instead of 'inputs.permute(1,0,2)' again
        weighted = torch.mul(inputs.permute(1, 0), weights.unsqueeze(-1).expand_as(inputs))
        # weighted: [seq_len, batch_size, input_size]
        # weighted_sum: [batch_size, input_size]
```

And here's the error traceback I keep getting:

```
Traceback (most recent call last):
  File "attention.py", line XX, in forward
    weighted = torch.mul(inputs.permute(1,0), weights.unsqueeze(-1).expand_as(inputs))
RuntimeError: The size of tensor a (10) must match the size of tensor b (20) at non-singleton dimension 0
```

I suspect there's something wrong with how I'm handling dimensions when calculating `weighted`, but I'm not sure how to fix it. Could you help me understand what might be going wrong here? Thanks!

<>: It looks like you're encountering a dimension mismatch when multiplying `inputs` with `weights`. The problem arises because you permute `inputs` a second time without reverting it before the `torch.mul` call.

Here's how you can fix the `forward` method:

```python
def forward(self, inputs):
    scores = self.get_attention(inputs)
    weights = nn.functional.softmax(scores, dim=1)

    if self.batch_first:
        inputs_permuted = inputs.permute(1, 0, 2)
    else:
        inputs_permuted = inputs

    weighted = torch.mul(inputs_permuted,
                         weights.unsqueeze(-1).expand_as(inputs_permuted))

    # Sum over the seq_len dimension -> [batch_size, input_size]
    return weighted.sum(dim=0)
```

Key changes:
- `inputs` is permuted only once.
- `weights` is unsqueezed and expanded so its shape matches `inputs_permuted`.
- The sequence-length dimension is summed at the end.

This should resolve your dimension mismatch error! 😊
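For reference, here is a self-contained sketch of a corrected attention module along the lines discussed above. It is an illustration rather than the thread's exact code: the class name `SimpleAttention` is made up, and the softmax is taken over the sequence dimension (the usual choice for attention weights), with inputs assumed to be `[seq_len, batch_size, input_size]`, or `[batch_size, seq_len, input_size]` when `batch_first=True`.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SimpleAttention(nn.Module):
    """Attention pooling over a sequence (illustrative sketch)."""

    def __init__(self, attention_size=512, batch_first=False, non_linearity="tanh"):
        super().__init__()
        self.batch_first = batch_first
        self.activation = nn.ReLU() if non_linearity == "relu" else nn.Tanh()
        self.attention_vector = nn.Parameter(
            torch.empty(attention_size).uniform_(-0.01, 0.01))

    def forward(self, inputs):
        # Normalise layout to [seq_len, batch_size, input_size].
        if self.batch_first:
            inputs = inputs.permute(1, 0, 2)

        # scores: [seq_len, batch_size]
        scores = self.activation(inputs.matmul(self.attention_vector))

        # Softmax over the sequence dimension gives one weight per time step.
        weights = F.softmax(scores, dim=0)

        # Weighted sum over time -> [batch_size, input_size]
        weighted = inputs * weights.unsqueeze(-1)
        return weighted.sum(dim=0)


# Quick shape check
if __name__ == "__main__":
    attn = SimpleAttention(attention_size=512, batch_first=True)
    x = torch.randn(4, 10, 512)   # [batch_size, seq_len, input_size]
    print(attn(x).shape)          # torch.Size([4, 512])
```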
<>: Thanks for the fix! Now that it's working without errors, I'm thinking about optimizing it further. Specifically, I want to make sure the operations are as efficient as possible when dealing with large sequences. Do you have any suggestions for improving memory usage or computational efficiency? Maybe there's a way to handle batching more effectively?

<>: To optimize your attention mechanism for large sequences:

1. **Batch Processing**: Make sure your data loader batches sequences of similar lengths together. This minimizes padding and reduces unnecessary computation.
2. **Use Efficient Data Types**: Consider half-precision floats (`float16`) if your hardware supports them (e.g. NVIDIA GPUs with Tensor Cores); this can reduce memory usage significantly.
3. **Avoid Unnecessary Permutations**: Tensor permutations can be costly. Structure your code so operations naturally align with your data layout.
4. **Leverage PyTorch Functions**: Use built-in functions such as `torch.bmm` for batch matrix multiplication instead of manually expanding dimensions.
5. **Profile Your Code**: Use PyTorch's profiling tools (`torch.autograd.profiler`) to identify bottlenecks in your code.
6. **Gradient Checkpointing**: If memory is a constraint during training due to large models or sequences, consider gradient checkpointing to trade compute for memory.

Implementing these strategies should improve both memory usage and computational efficiency! 🚀

<>: Great tips! I'm particularly interested in minimizing tensor permutations and leveraging efficient data types like float16. Could you guide me on how to implement these changes? Also, any advice on using PyTorch functions like `torch.bmm` more effectively would be appreciated!

<>: To minimize tensor permutations and leverage efficient data types:

### Minimizing Tensor Permutations

- Structure your operations so they naturally align with your data layout instead of permuting tensors frequently.
- For example, if your input is `[batch_size, seq_len, input_dim]`, try to keep operations within this layout whenever possible.

### Using Float16

To use half-precision floats:

```python
model.half()            # convert model parameters to float16
inputs = inputs.half()  # convert input tensors to float16

# Convert back to float32 where an operation needs full precision
result = result.float()
```

### Leveraging `torch.bmm`

Make sure your tensors are shaped correctly before calling `torch.bmm`:

```python
# queries, keys, values: [batch_size, seq_len, hidden_dim]

# Transpose keys for bmm compatibility
keys_transposed = keys.transpose(1, 2)                   # [batch_size, hidden_dim, seq_len]

# Batch matrix multiplication
attention_scores = torch.bmm(queries, keys_transposed)   # [batch_size, seq_len, seq_len]

# Softmax along the last dimension gives the attention weights
attention_weights = nn.functional.softmax(attention_scores, dim=-1)

# Weighted sum of the values
output_context_vector = torch.bmm(attention_weights, values)  # [batch_size, seq_len, hidden_dim]
```

By following these practices, aligning operations with your data layout and using efficient data types, you'll improve performance while maintaining accuracy! ⚡️
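Tip 6 above mentions gradient checkpointing without showing code. Below is a minimal sketch using `torch.utils.checkpoint` on a generic stack of linear blocks; the model, shapes, and the `use_reentrant=False` flag (available in recent PyTorch versions) are illustrative assumptions rather than anything from this thread.

```python
import torch
import torch.nn as nn
from torch.utils.checkpoint import checkpoint

# A deliberately deep stack of blocks so activation memory is noticeable.
blocks = nn.ModuleList([
    nn.Sequential(nn.Linear(1024, 1024), nn.ReLU()) for _ in range(8)
])

def forward_with_checkpointing(x):
    for block in blocks:
        # Activations inside each block are not stored; they are recomputed
        # during backward, trading extra compute for lower memory use.
        x = checkpoint(block, x, use_reentrant=False)
    return x

x = torch.randn(32, 1024, requires_grad=True)
out = forward_with_checkpointing(x)
out.sum().backward()   # gradients flow through the checkpointed blocks
```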
<>: Hi there, I've been working on implementing backpropagation through time (BPTT), combined with truncated BPTT and gradient clipping, in my RNN model training loop.

Here's my current implementation:

```python
def train_with_bptt(model_char_filename_vec):

    def get_batch(data_x):
        x_batches = []
        y_batches = []
        num_batches = len(data_x) // args.BPTT_length + int(len(data_x) % args.BPTT_length != 0)
        assert len(data_x) % num_batches == 0, (data_x.shape, num_batches, len(data_x))
        feature_array = np.zeros([args.BPTT_length, args.feature_num], dtype=np.float32)
        label_array = np.zeros([args.BPTT_length], dtype=np.int32)
        for i_batch in range(num_batches):
            slice_ids = np.arange(i_batch * args.BPTT_length, (i_batch + 1) * args.BPTT_length)
            feature_array[:, :] = data_x[slice_ids, :]
            label_array[:] = data_x[slice_ids + args.prediction_window, :][:, -args.prediction_window:]
            x_batches.append(feature_array)
            y_batches.append(label_array)
        return x_batches, y_batches

    def train_data(x, y):
        global_epoch_id += 1
        global_step_id += 1
        model.train()
        parameters = train_dataloader.create_dict_of_parameters(model=model)
        loss = train_dataloader.update(x, y, **parameters)
        train_dataloader.backward(loss, model=model)
        speed = np.array(global_step_id) / time.time() - start_time
        print('epoch:{}, bid:{}, loss:{:.4f}, speed:{:.4f} s/iter'.format(
            global_epoch_id, batch_id + 1, np.mean(loss.numpy()), speed))

    def evaluate_data(x, y, tensorboard_log_dir=None, x_min_max=None, y_min_max=None, is_train_data=False):
        time_cost = time.time()
        model.eval()
        eval_parameters = train_dataloader.create_dict_of_parameters(model=model, is_eval=True)
        loss = train_dataloader.update(x, y, **eval_parameters)
        y_pred_np = prediction_output(y_min_max, model, x,
                                      min_prob_value=global_paramters['min_prob_value'],
                                      convert_to_type=np.float64)
        relative_error_np = calc_relative_error(y_pred_np, y_true=y, y_min_max=y_min_max)
        log_item = {'train_loss': loss, 'eval_loss': loss,
                    'train_relative_error': relative_error_np,
                    'eval_relative_error': relative_error_np}
        train_dataloader.log(log_item, numpy_type=True, tensorboard_log_dir=tensorboard_log_dir,
                             x=x, min_prob_value=global_paramters['min_prob_value'])
        print('epoch:{}, bid:{}, train_loss:{:.4f}, eval_loss:{:.4f}, '
              'train_relative_error:{:.4f}, eval_relative_error:{:.4f}, time_cost:{:.4f}s'.format(
                  global_epoch_id, batch_id + 1, np.mean(loss.numpy()), np.mean(loss.numpy()),
                  np.mean(relative_error_np), np.mean(relative_error_np), time.time() - time_cost))

    print('training started.')
    start_time = time.time()
    path_checkpoint_saver = '/home/pj/data/rnn/get_daily_stock_data/checkpoints/saver_{}.ckpt'.format(args.experiment_name)
    path_tensorboard_logdir = '/home/pj/data/rnn/get_daily_stock_data/tensorboard_log_dir/{}'.format(args.experiment_name)
    path_saved_model_args = '/home/pj/data/rnn/get_daily_stock_data/saved_model_args.json'

    max_train_data_date = args.max_train_data_date
    min_train_data_date = args.min_train_data_date
    max_test_data_date = max_train_data_date + datetime.timedelta(days=args.prediction_gap_days)
    min_test_data_date = min_train_data_date + datetime.timedelta(days=args.prediction_gap_days)
    assert min_test_data_date <= max_test_data_date, (max_test_data_date, min_test_data_date)
    print('max date:', max_train_data_date, 'min date:', min_train_data_date,
          'max test date:', max_test_data_date, 'min test date:', min_test_data_date)

    stock_list = get_all_stock_list(min_init_price=args.min_init_price, max_init_price=args.max_init_price,
                                    max_change_rate=args.max_change_rate, min_change_rate=args.min_change_rate,
                                    max_history_dates=max(args.history_dates, args.prediction_gap_days))
    print('total stocks count:', len(stock_list))
    total_steps = int(len(stock_list) / args.test_every_n_samples) + int(len(stock_list) % args.test_every_n_samples != 0)
    stock_index_iterator = stock_index_generator(stock_list, total_steps, args.test_every_n_samples)

    # Load previously saved model args.
    global global_paramters, model, global_epoch_id, batch_id, start_time  # (plus many more bookkeeping globals)
    load_saved_params_from_json(path_saved_model_args=path_saved_model_args,
                                default_model_path=path_checkpoint_saver,
                                default_global_paramters={})
    # Initialise the many history / prediction bookkeeping lists
    # (histories_output, prediction_datas, moving_average_values, ...).

    while True:
        # Randomly sample RNN / GCN / GAT hyperparameters (layer counts, keep
        # probabilities, dropout and regularization rates, activations, ...),
        # then rebuild the graph and the model.
        tf.reset_default_graph()
        tf.set_random_seed(args.random_seed)
        np.random.seed(args.random_seed)
        model = create_model(global_paramters['feature_num'], target_nums=target_nums,
                             num_stocks=len(stock_list), ...)  # many randomly sampled kwargs

        # Restore previous parameters.
        restorer.restore(sess, path_checkpoint_saver + '/model-' + str(last_save_global_step) + '.cptk')

        # Build the per-stock feature / target matrices
        # (get_stock_dataframe / get_feature_matrix / get_features_and_target_matrix),
        # then create the BPTT batches.
        X_batches, Y_batches = get_batch(features_and_target_matrix.iloc[:, :].values[:-prediction_window])
        X_next_batch, Y_next_batch = get_batch(
            features_and_target_matrix.iloc[:, :].values[-prediction_window:].reshape(
                [prediction_window] + list(features_and_target_matrix.iloc[:, :].values[:-prediction_window].shape)[1:]))

        # ... epoch/shuffle/step index bookkeeping and console progress logging
        # (this part of the listing is garbled in the original post) ...

        sess.run(tf.global_variables_initializer())
        sess.run(tf.local_variables_initializer())
        sess.run(tf.tables_initializer())
        saver = tf.train.Saver(var_list=tf.global_variables() + tf.local_variables())
        saver.save(sess, path_checkpoint_saver + '/model-' + str(global_step_id) + '.cptk')
```

However, when I run this code it throws an error that leaves me puzzled:

```
Traceback (most recent call last):
  ...
AssertionError
```
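For context on the technique the question refers to, here is a minimal, self-contained sketch of truncated BPTT with gradient clipping on a generic PyTorch LSTM. Every name, shape, and value in it is an illustrative assumption and is unrelated to the stock-prediction code above.

```python
import torch
import torch.nn as nn

# Illustrative settings; not taken from the question above.
seq_len, bptt_len, batch_size, input_dim, hidden_dim = 200, 20, 8, 16, 32

model = nn.LSTM(input_dim, hidden_dim, batch_first=True)
head = nn.Linear(hidden_dim, 1)
optimizer = torch.optim.Adam(list(model.parameters()) + list(head.parameters()), lr=1e-3)
criterion = nn.MSELoss()

x = torch.randn(batch_size, seq_len, input_dim)  # dummy inputs
y = torch.randn(batch_size, seq_len, 1)          # dummy targets

hidden = None
for start in range(0, seq_len, bptt_len):
    chunk_x = x[:, start:start + bptt_len]
    chunk_y = y[:, start:start + bptt_len]

    # Detach the carried-over state so gradients do not flow past the
    # current truncation window (truncated BPTT).
    if hidden is not None:
        hidden = tuple(h.detach() for h in hidden)

    optimizer.zero_grad()
    out, hidden = model(chunk_x, hidden)
    loss = criterion(head(out), chunk_y)
    loss.backward()

    # Gradient clipping keeps the BPTT updates stable.
    torch.nn.utils.clip_grad_norm_(
        list(model.parameters()) + list(head.parameters()), max_norm=1.0)
    optimizer.step()
```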